Dec 01 19:51:56 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Dec 01 19:51:56 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 01 19:51:56 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 01 19:51:56 localhost kernel: BIOS-provided physical RAM map:
Dec 01 19:51:56 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 01 19:51:56 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 01 19:51:56 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 01 19:51:56 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 01 19:51:56 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 01 19:51:56 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 01 19:51:56 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 01 19:51:56 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 01 19:51:56 localhost kernel: NX (Execute Disable) protection: active
Dec 01 19:51:56 localhost kernel: APIC: Static calls initialized
Dec 01 19:51:56 localhost kernel: SMBIOS 2.8 present.
Dec 01 19:51:56 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 01 19:51:56 localhost kernel: Hypervisor detected: KVM
Dec 01 19:51:56 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 01 19:51:56 localhost kernel: kvm-clock: using sched offset of 3311789250 cycles
Dec 01 19:51:56 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 01 19:51:56 localhost kernel: tsc: Detected 2800.000 MHz processor
Dec 01 19:51:56 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 01 19:51:56 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 01 19:51:56 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 01 19:51:56 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 01 19:51:56 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 01 19:51:56 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 01 19:51:56 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 01 19:51:56 localhost kernel: Using GB pages for direct mapping
Dec 01 19:51:56 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Dec 01 19:51:56 localhost kernel: ACPI: Early table checksum verification disabled
Dec 01 19:51:56 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 01 19:51:56 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 01 19:51:56 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 01 19:51:56 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 01 19:51:56 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 01 19:51:56 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 01 19:51:56 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 01 19:51:56 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 01 19:51:56 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 01 19:51:56 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 01 19:51:56 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 01 19:51:56 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 01 19:51:56 localhost kernel: No NUMA configuration found
Dec 01 19:51:56 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 01 19:51:56 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Dec 01 19:51:56 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 01 19:51:56 localhost kernel: Zone ranges:
Dec 01 19:51:56 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 01 19:51:56 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 01 19:51:56 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 01 19:51:56 localhost kernel:   Device   empty
Dec 01 19:51:56 localhost kernel: Movable zone start for each node
Dec 01 19:51:56 localhost kernel: Early memory node ranges
Dec 01 19:51:56 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 01 19:51:56 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 01 19:51:56 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 01 19:51:56 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 01 19:51:56 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 01 19:51:56 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 01 19:51:56 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 01 19:51:56 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 01 19:51:56 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 01 19:51:56 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 01 19:51:56 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 01 19:51:56 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 01 19:51:56 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 01 19:51:56 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 01 19:51:56 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 01 19:51:56 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 01 19:51:56 localhost kernel: TSC deadline timer available
Dec 01 19:51:56 localhost kernel: CPU topo: Max. logical packages:   8
Dec 01 19:51:56 localhost kernel: CPU topo: Max. logical dies:       8
Dec 01 19:51:56 localhost kernel: CPU topo: Max. dies per package:   1
Dec 01 19:51:56 localhost kernel: CPU topo: Max. threads per core:   1
Dec 01 19:51:56 localhost kernel: CPU topo: Num. cores per package:     1
Dec 01 19:51:56 localhost kernel: CPU topo: Num. threads per package:   1
Dec 01 19:51:56 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 01 19:51:56 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 01 19:51:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 01 19:51:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 01 19:51:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 01 19:51:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 01 19:51:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 01 19:51:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 01 19:51:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 01 19:51:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 01 19:51:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 01 19:51:56 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 01 19:51:56 localhost kernel: Booting paravirtualized kernel on KVM
Dec 01 19:51:56 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 01 19:51:56 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 01 19:51:56 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 01 19:51:56 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 01 19:51:56 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 01 19:51:56 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 01 19:51:56 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 01 19:51:56 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Dec 01 19:51:56 localhost kernel: random: crng init done
Dec 01 19:51:56 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 01 19:51:56 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 01 19:51:56 localhost kernel: Fallback order for Node 0: 0 
Dec 01 19:51:56 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 01 19:51:56 localhost kernel: Policy zone: Normal
Dec 01 19:51:56 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 01 19:51:56 localhost kernel: software IO TLB: area num 8.
Dec 01 19:51:56 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 01 19:51:56 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Dec 01 19:51:56 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 01 19:51:56 localhost kernel: Dynamic Preempt: voluntary
Dec 01 19:51:56 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 01 19:51:56 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 01 19:51:56 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 01 19:51:56 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 01 19:51:56 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 01 19:51:56 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 01 19:51:56 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 01 19:51:56 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 01 19:51:56 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 01 19:51:56 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 01 19:51:56 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 01 19:51:56 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 01 19:51:56 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 01 19:51:56 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 01 19:51:56 localhost kernel: Console: colour VGA+ 80x25
Dec 01 19:51:56 localhost kernel: printk: console [ttyS0] enabled
Dec 01 19:51:56 localhost kernel: ACPI: Core revision 20230331
Dec 01 19:51:56 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 01 19:51:56 localhost kernel: x2apic enabled
Dec 01 19:51:56 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 01 19:51:56 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 01 19:51:56 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec 01 19:51:56 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 01 19:51:56 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 01 19:51:56 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 01 19:51:56 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 01 19:51:56 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 01 19:51:56 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 01 19:51:56 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 01 19:51:56 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 01 19:51:56 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 01 19:51:56 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 01 19:51:56 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 01 19:51:56 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 01 19:51:56 localhost kernel: x86/bugs: return thunk changed
Dec 01 19:51:56 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 01 19:51:56 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 01 19:51:56 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 01 19:51:56 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 01 19:51:56 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 01 19:51:56 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 01 19:51:56 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 01 19:51:56 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 01 19:51:56 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 01 19:51:56 localhost kernel: landlock: Up and running.
Dec 01 19:51:56 localhost kernel: Yama: becoming mindful.
Dec 01 19:51:56 localhost kernel: SELinux:  Initializing.
Dec 01 19:51:56 localhost kernel: LSM support for eBPF active
Dec 01 19:51:56 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 01 19:51:56 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 01 19:51:56 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 01 19:51:56 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 01 19:51:56 localhost kernel: ... version:                0
Dec 01 19:51:56 localhost kernel: ... bit width:              48
Dec 01 19:51:56 localhost kernel: ... generic registers:      6
Dec 01 19:51:56 localhost kernel: ... value mask:             0000ffffffffffff
Dec 01 19:51:56 localhost kernel: ... max period:             00007fffffffffff
Dec 01 19:51:56 localhost kernel: ... fixed-purpose events:   0
Dec 01 19:51:56 localhost kernel: ... event mask:             000000000000003f
Dec 01 19:51:56 localhost kernel: signal: max sigframe size: 1776
Dec 01 19:51:56 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 01 19:51:56 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 01 19:51:56 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 01 19:51:56 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 01 19:51:56 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 01 19:51:56 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 01 19:51:56 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec 01 19:51:56 localhost kernel: node 0 deferred pages initialised in 10ms
Dec 01 19:51:56 localhost kernel: Memory: 7765840K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616280K reserved, 0K cma-reserved)
Dec 01 19:51:56 localhost kernel: devtmpfs: initialized
Dec 01 19:51:56 localhost kernel: x86/mm: Memory block size: 128MB
Dec 01 19:51:56 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 01 19:51:56 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec 01 19:51:56 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 01 19:51:56 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 01 19:51:56 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 01 19:51:56 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 01 19:51:56 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 01 19:51:56 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 01 19:51:56 localhost kernel: audit: type=2000 audit(1764618714.069:1): state=initialized audit_enabled=0 res=1
Dec 01 19:51:56 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 01 19:51:56 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 01 19:51:56 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 01 19:51:56 localhost kernel: cpuidle: using governor menu
Dec 01 19:51:56 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 01 19:51:56 localhost kernel: PCI: Using configuration type 1 for base access
Dec 01 19:51:56 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 01 19:51:56 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 01 19:51:56 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 01 19:51:56 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 01 19:51:56 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 01 19:51:56 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 01 19:51:56 localhost kernel: Demotion targets for Node 0: null
Dec 01 19:51:56 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 01 19:51:56 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 01 19:51:56 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 01 19:51:56 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 01 19:51:56 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 01 19:51:56 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 01 19:51:56 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 01 19:51:56 localhost kernel: ACPI: Interpreter enabled
Dec 01 19:51:56 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 01 19:51:56 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 01 19:51:56 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 01 19:51:56 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 01 19:51:56 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 01 19:51:56 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 01 19:51:56 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [3] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [4] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [5] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [6] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [7] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [8] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [9] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [10] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [11] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [12] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [13] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [14] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [15] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [16] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [17] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [18] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [19] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [20] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [21] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [22] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [23] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [24] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [25] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [26] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [27] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [28] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [29] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [30] registered
Dec 01 19:51:56 localhost kernel: acpiphp: Slot [31] registered
Dec 01 19:51:56 localhost kernel: PCI host bridge to bus 0000:00
Dec 01 19:51:56 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 01 19:51:56 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 01 19:51:56 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 01 19:51:56 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 01 19:51:56 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 01 19:51:56 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 01 19:51:56 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 01 19:51:56 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 01 19:51:56 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 01 19:51:56 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 01 19:51:56 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 01 19:51:56 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 01 19:51:56 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 01 19:51:56 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 01 19:51:56 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 01 19:51:56 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 01 19:51:56 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 01 19:51:56 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 01 19:51:56 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 01 19:51:56 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 01 19:51:56 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 01 19:51:56 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 01 19:51:56 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 01 19:51:56 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 01 19:51:56 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 01 19:51:56 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 01 19:51:56 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 01 19:51:56 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 01 19:51:56 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 01 19:51:56 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 01 19:51:56 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 01 19:51:56 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 01 19:51:56 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 01 19:51:56 localhost kernel: iommu: Default domain type: Translated
Dec 01 19:51:56 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 01 19:51:56 localhost kernel: SCSI subsystem initialized
Dec 01 19:51:56 localhost kernel: ACPI: bus type USB registered
Dec 01 19:51:56 localhost kernel: usbcore: registered new interface driver usbfs
Dec 01 19:51:56 localhost kernel: usbcore: registered new interface driver hub
Dec 01 19:51:56 localhost kernel: usbcore: registered new device driver usb
Dec 01 19:51:56 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 01 19:51:56 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 01 19:51:56 localhost kernel: PTP clock support registered
Dec 01 19:51:56 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 01 19:51:56 localhost kernel: NetLabel: Initializing
Dec 01 19:51:56 localhost kernel: NetLabel:  domain hash size = 128
Dec 01 19:51:56 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 01 19:51:56 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 01 19:51:56 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 01 19:51:56 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 01 19:51:56 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 01 19:51:56 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 01 19:51:56 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 01 19:51:56 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 01 19:51:56 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 01 19:51:56 localhost kernel: vgaarb: loaded
Dec 01 19:51:56 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 01 19:51:56 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 01 19:51:56 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 01 19:51:56 localhost kernel: pnp: PnP ACPI init
Dec 01 19:51:56 localhost kernel: pnp 00:03: [dma 2]
Dec 01 19:51:56 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 01 19:51:56 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 01 19:51:56 localhost kernel: NET: Registered PF_INET protocol family
Dec 01 19:51:56 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 01 19:51:56 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 01 19:51:56 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 01 19:51:56 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 01 19:51:56 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 01 19:51:56 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 01 19:51:56 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 01 19:51:56 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 01 19:51:56 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 01 19:51:56 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 01 19:51:56 localhost kernel: NET: Registered PF_XDP protocol family
Dec 01 19:51:56 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 01 19:51:56 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 01 19:51:56 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 01 19:51:56 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 01 19:51:56 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 01 19:51:56 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 01 19:51:56 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 01 19:51:56 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 108537 usecs
Dec 01 19:51:56 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 01 19:51:56 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 01 19:51:56 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 01 19:51:56 localhost kernel: ACPI: bus type thunderbolt registered
Dec 01 19:51:56 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 01 19:51:56 localhost kernel: Initialise system trusted keyrings
Dec 01 19:51:56 localhost kernel: Key type blacklist registered
Dec 01 19:51:56 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 01 19:51:56 localhost kernel: zbud: loaded
Dec 01 19:51:56 localhost kernel: integrity: Platform Keyring initialized
Dec 01 19:51:56 localhost kernel: integrity: Machine keyring initialized
Dec 01 19:51:56 localhost kernel: Freeing initrd memory: 85868K
Dec 01 19:51:56 localhost kernel: NET: Registered PF_ALG protocol family
Dec 01 19:51:56 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 01 19:51:56 localhost kernel: Key type asymmetric registered
Dec 01 19:51:56 localhost kernel: Asymmetric key parser 'x509' registered
Dec 01 19:51:56 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 01 19:51:56 localhost kernel: io scheduler mq-deadline registered
Dec 01 19:51:56 localhost kernel: io scheduler kyber registered
Dec 01 19:51:56 localhost kernel: io scheduler bfq registered
Dec 01 19:51:56 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 01 19:51:56 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 01 19:51:56 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 01 19:51:56 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 01 19:51:56 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 01 19:51:56 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 01 19:51:56 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 01 19:51:56 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 01 19:51:56 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 01 19:51:56 localhost kernel: Non-volatile memory driver v1.3
Dec 01 19:51:56 localhost kernel: rdac: device handler registered
Dec 01 19:51:56 localhost kernel: hp_sw: device handler registered
Dec 01 19:51:56 localhost kernel: emc: device handler registered
Dec 01 19:51:56 localhost kernel: alua: device handler registered
Dec 01 19:51:56 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 01 19:51:56 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 01 19:51:56 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 01 19:51:56 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 01 19:51:56 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 01 19:51:56 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 01 19:51:56 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 01 19:51:56 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Dec 01 19:51:56 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 01 19:51:56 localhost kernel: hub 1-0:1.0: USB hub found
Dec 01 19:51:56 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 01 19:51:56 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 01 19:51:56 localhost kernel: usbserial: USB Serial support registered for generic
Dec 01 19:51:56 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 01 19:51:56 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 01 19:51:56 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 01 19:51:56 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 01 19:51:56 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 01 19:51:56 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 01 19:51:56 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 01 19:51:56 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-01T19:51:55 UTC (1764618715)
Dec 01 19:51:56 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 01 19:51:56 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 01 19:51:56 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 01 19:51:56 localhost kernel: usbcore: registered new interface driver usbhid
Dec 01 19:51:56 localhost kernel: usbhid: USB HID core driver
Dec 01 19:51:56 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 01 19:51:56 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 01 19:51:56 localhost kernel: Initializing XFRM netlink socket
Dec 01 19:51:56 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 01 19:51:56 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 01 19:51:56 localhost kernel: Segment Routing with IPv6
Dec 01 19:51:56 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 01 19:51:56 localhost kernel: mpls_gso: MPLS GSO support
Dec 01 19:51:56 localhost kernel: IPI shorthand broadcast: enabled
Dec 01 19:51:56 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 01 19:51:56 localhost kernel: AES CTR mode by8 optimization enabled
Dec 01 19:51:56 localhost kernel: sched_clock: Marking stable (1304007712, 243447103)->(1621481481, -74026666)
Dec 01 19:51:56 localhost kernel: registered taskstats version 1
Dec 01 19:51:56 localhost kernel: Loading compiled-in X.509 certificates
Dec 01 19:51:56 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Dec 01 19:51:56 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 01 19:51:56 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 01 19:51:56 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 01 19:51:56 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 01 19:51:56 localhost kernel: Demotion targets for Node 0: null
Dec 01 19:51:56 localhost kernel: page_owner is disabled
Dec 01 19:51:56 localhost kernel: Key type .fscrypt registered
Dec 01 19:51:56 localhost kernel: Key type fscrypt-provisioning registered
Dec 01 19:51:56 localhost kernel: Key type big_key registered
Dec 01 19:51:56 localhost kernel: Key type encrypted registered
Dec 01 19:51:56 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 01 19:51:56 localhost kernel: Loading compiled-in module X.509 certificates
Dec 01 19:51:56 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Dec 01 19:51:56 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 01 19:51:56 localhost kernel: ima: No architecture policies found
Dec 01 19:51:56 localhost kernel: evm: Initialising EVM extended attributes:
Dec 01 19:51:56 localhost kernel: evm: security.selinux
Dec 01 19:51:56 localhost kernel: evm: security.SMACK64 (disabled)
Dec 01 19:51:56 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 01 19:51:56 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 01 19:51:56 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 01 19:51:56 localhost kernel: evm: security.apparmor (disabled)
Dec 01 19:51:56 localhost kernel: evm: security.ima
Dec 01 19:51:56 localhost kernel: evm: security.capability
Dec 01 19:51:56 localhost kernel: evm: HMAC attrs: 0x1
Dec 01 19:51:56 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 01 19:51:56 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 01 19:51:56 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 01 19:51:56 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 01 19:51:56 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 01 19:51:56 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 01 19:51:56 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 01 19:51:56 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 01 19:51:56 localhost kernel: Running certificate verification RSA selftest
Dec 01 19:51:56 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 01 19:51:56 localhost kernel: Running certificate verification ECDSA selftest
Dec 01 19:51:56 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 01 19:51:56 localhost kernel: clk: Disabling unused clocks
Dec 01 19:51:56 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 01 19:51:56 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 01 19:51:56 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 01 19:51:56 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Dec 01 19:51:56 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 01 19:51:56 localhost kernel: Run /init as init process
Dec 01 19:51:56 localhost kernel:   with arguments:
Dec 01 19:51:56 localhost kernel:     /init
Dec 01 19:51:56 localhost kernel:   with environment:
Dec 01 19:51:56 localhost kernel:     HOME=/
Dec 01 19:51:56 localhost kernel:     TERM=linux
Dec 01 19:51:56 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Dec 01 19:51:56 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 01 19:51:56 localhost systemd[1]: Detected virtualization kvm.
Dec 01 19:51:56 localhost systemd[1]: Detected architecture x86-64.
Dec 01 19:51:56 localhost systemd[1]: Running in initrd.
Dec 01 19:51:56 localhost systemd[1]: No hostname configured, using default hostname.
Dec 01 19:51:56 localhost systemd[1]: Hostname set to <localhost>.
Dec 01 19:51:56 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 01 19:51:56 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 01 19:51:56 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 01 19:51:56 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 01 19:51:56 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 01 19:51:56 localhost systemd[1]: Reached target Local File Systems.
Dec 01 19:51:56 localhost systemd[1]: Reached target Path Units.
Dec 01 19:51:56 localhost systemd[1]: Reached target Slice Units.
Dec 01 19:51:56 localhost systemd[1]: Reached target Swaps.
Dec 01 19:51:56 localhost systemd[1]: Reached target Timer Units.
Dec 01 19:51:56 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 01 19:51:56 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 01 19:51:56 localhost systemd[1]: Listening on Journal Socket.
Dec 01 19:51:56 localhost systemd[1]: Listening on udev Control Socket.
Dec 01 19:51:56 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 01 19:51:56 localhost systemd[1]: Reached target Socket Units.
Dec 01 19:51:56 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 01 19:51:56 localhost systemd[1]: Starting Journal Service...
Dec 01 19:51:56 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 01 19:51:56 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 01 19:51:56 localhost systemd[1]: Starting Create System Users...
Dec 01 19:51:56 localhost systemd[1]: Starting Setup Virtual Console...
Dec 01 19:51:56 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 01 19:51:56 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 01 19:51:56 localhost systemd[1]: Finished Create System Users.
Dec 01 19:51:56 localhost systemd-journald[307]: Journal started
Dec 01 19:51:56 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/6d7269d0ae074538adba52753671c0ef) is 8.0M, max 153.6M, 145.6M free.
Dec 01 19:51:56 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Dec 01 19:51:56 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Dec 01 19:51:56 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 01 19:51:56 localhost systemd[1]: Started Journal Service.
Dec 01 19:51:56 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 01 19:51:56 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 01 19:51:56 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 01 19:51:56 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 01 19:51:56 localhost systemd[1]: Finished Setup Virtual Console.
Dec 01 19:51:56 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 01 19:51:56 localhost systemd[1]: Starting dracut cmdline hook...
Dec 01 19:51:56 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Dec 01 19:51:56 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 01 19:51:56 localhost systemd[1]: Finished dracut cmdline hook.
Dec 01 19:51:56 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 01 19:51:56 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 01 19:51:56 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 01 19:51:56 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 01 19:51:56 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 01 19:51:56 localhost kernel: RPC: Registered udp transport module.
Dec 01 19:51:56 localhost kernel: RPC: Registered tcp transport module.
Dec 01 19:51:56 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 01 19:51:56 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 01 19:51:57 localhost rpc.statd[446]: Version 2.5.4 starting
Dec 01 19:51:57 localhost rpc.statd[446]: Initializing NSM state
Dec 01 19:51:57 localhost rpc.idmapd[451]: Setting log level to 0
Dec 01 19:51:57 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 01 19:51:57 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 01 19:51:57 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Dec 01 19:51:57 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 01 19:51:57 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 01 19:51:57 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 01 19:51:57 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 01 19:51:57 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 01 19:51:57 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 01 19:51:57 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 01 19:51:57 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 01 19:51:57 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 01 19:51:57 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 01 19:51:57 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 01 19:51:57 localhost systemd[1]: Reached target Network.
Dec 01 19:51:57 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 01 19:51:57 localhost systemd[1]: Starting dracut initqueue hook...
Dec 01 19:51:57 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 01 19:51:57 localhost systemd[1]: Reached target System Initialization.
Dec 01 19:51:57 localhost systemd[1]: Reached target Basic System.
Dec 01 19:51:57 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 01 19:51:57 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 01 19:51:57 localhost kernel:  vda: vda1
Dec 01 19:51:57 localhost systemd-udevd[501]: Network interface NamePolicy= disabled on kernel command line.
Dec 01 19:51:57 localhost kernel: libata version 3.00 loaded.
Dec 01 19:51:57 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 01 19:51:57 localhost kernel: scsi host0: ata_piix
Dec 01 19:51:57 localhost kernel: scsi host1: ata_piix
Dec 01 19:51:57 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 01 19:51:57 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 01 19:51:57 localhost systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec 01 19:51:57 localhost systemd[1]: Reached target Initrd Root Device.
Dec 01 19:51:57 localhost kernel: ata1: found unknown device (class 0)
Dec 01 19:51:57 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 01 19:51:57 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 01 19:51:57 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 01 19:51:57 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 01 19:51:57 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 01 19:51:57 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 01 19:51:57 localhost systemd[1]: Finished dracut initqueue hook.
Dec 01 19:51:57 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 01 19:51:57 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 01 19:51:57 localhost systemd[1]: Reached target Remote File Systems.
Dec 01 19:51:57 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 01 19:51:57 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 01 19:51:57 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Dec 01 19:51:57 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Dec 01 19:51:57 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec 01 19:51:57 localhost systemd[1]: Mounting /sysroot...
Dec 01 19:51:58 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 01 19:51:58 localhost kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Dec 01 19:51:58 localhost kernel: XFS (vda1): Ending clean mount
Dec 01 19:51:58 localhost systemd[1]: Mounted /sysroot.
Dec 01 19:51:58 localhost systemd[1]: Reached target Initrd Root File System.
Dec 01 19:51:58 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 01 19:51:58 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 01 19:51:58 localhost systemd[1]: Reached target Initrd File Systems.
Dec 01 19:51:58 localhost systemd[1]: Reached target Initrd Default Target.
Dec 01 19:51:58 localhost systemd[1]: Starting dracut mount hook...
Dec 01 19:51:58 localhost systemd[1]: Finished dracut mount hook.
Dec 01 19:51:58 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 01 19:51:58 localhost rpc.idmapd[451]: exiting on signal 15
Dec 01 19:51:58 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 01 19:51:58 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 01 19:51:58 localhost systemd[1]: Stopped target Network.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Timer Units.
Dec 01 19:51:58 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 01 19:51:58 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Basic System.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Path Units.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Remote File Systems.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Slice Units.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Socket Units.
Dec 01 19:51:58 localhost systemd[1]: Stopped target System Initialization.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Local File Systems.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Swaps.
Dec 01 19:51:58 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped dracut mount hook.
Dec 01 19:51:58 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 01 19:51:58 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 01 19:51:58 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 01 19:51:58 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 01 19:51:58 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 01 19:51:58 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 01 19:51:58 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 01 19:51:58 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 01 19:51:58 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 01 19:51:58 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 01 19:51:58 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 01 19:51:58 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Closed udev Control Socket.
Dec 01 19:51:58 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Closed udev Kernel Socket.
Dec 01 19:51:58 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 01 19:51:58 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 01 19:51:58 localhost systemd[1]: Starting Cleanup udev Database...
Dec 01 19:51:58 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 01 19:51:58 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 01 19:51:58 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Stopped Create System Users.
Dec 01 19:51:58 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 01 19:51:58 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 01 19:51:58 localhost systemd[1]: Finished Cleanup udev Database.
Dec 01 19:51:58 localhost systemd[1]: Reached target Switch Root.
Dec 01 19:51:58 localhost systemd[1]: Starting Switch Root...
Dec 01 19:51:58 localhost systemd[1]: Switching root.
Dec 01 19:51:58 localhost systemd-journald[307]: Journal stopped
Dec 01 19:51:59 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Dec 01 19:51:59 localhost kernel: audit: type=1404 audit(1764618718.873:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 01 19:51:59 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 01 19:51:59 localhost kernel: SELinux:  policy capability open_perms=1
Dec 01 19:51:59 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 01 19:51:59 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 01 19:51:59 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 01 19:51:59 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 01 19:51:59 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 01 19:51:59 localhost kernel: audit: type=1403 audit(1764618718.998:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 01 19:51:59 localhost systemd[1]: Successfully loaded SELinux policy in 128.668ms.
Dec 01 19:51:59 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.380ms.
Dec 01 19:51:59 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 01 19:51:59 localhost systemd[1]: Detected virtualization kvm.
Dec 01 19:51:59 localhost systemd[1]: Detected architecture x86-64.
Dec 01 19:51:59 localhost systemd-rc-local-generator[641]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 19:51:59 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 01 19:51:59 localhost systemd[1]: Stopped Switch Root.
Dec 01 19:51:59 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 01 19:51:59 localhost systemd[1]: Created slice Slice /system/getty.
Dec 01 19:51:59 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 01 19:51:59 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 01 19:51:59 localhost systemd[1]: Created slice User and Session Slice.
Dec 01 19:51:59 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 01 19:51:59 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 01 19:51:59 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 01 19:51:59 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 01 19:51:59 localhost systemd[1]: Stopped target Switch Root.
Dec 01 19:51:59 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 01 19:51:59 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 01 19:51:59 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 01 19:51:59 localhost systemd[1]: Reached target Path Units.
Dec 01 19:51:59 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 01 19:51:59 localhost systemd[1]: Reached target Slice Units.
Dec 01 19:51:59 localhost systemd[1]: Reached target Swaps.
Dec 01 19:51:59 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 01 19:51:59 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 01 19:51:59 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 01 19:51:59 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 01 19:51:59 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 01 19:51:59 localhost systemd[1]: Listening on udev Control Socket.
Dec 01 19:51:59 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 01 19:51:59 localhost systemd[1]: Mounting Huge Pages File System...
Dec 01 19:51:59 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 01 19:51:59 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 01 19:51:59 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 01 19:51:59 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 01 19:51:59 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 01 19:51:59 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 01 19:51:59 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 01 19:51:59 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 01 19:51:59 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 01 19:51:59 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 01 19:51:59 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 01 19:51:59 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 01 19:51:59 localhost systemd[1]: Stopped Journal Service.
Dec 01 19:51:59 localhost kernel: fuse: init (API version 7.37)
Dec 01 19:51:59 localhost systemd[1]: Starting Journal Service...
Dec 01 19:51:59 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 01 19:51:59 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 01 19:51:59 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 01 19:51:59 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 01 19:51:59 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 01 19:51:59 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 01 19:51:59 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 01 19:51:59 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 01 19:51:59 localhost systemd[1]: Mounted Huge Pages File System.
Dec 01 19:51:59 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 01 19:51:59 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 01 19:51:59 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 01 19:51:59 localhost systemd-journald[682]: Journal started
Dec 01 19:51:59 localhost systemd-journald[682]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec 01 19:51:59 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 01 19:51:59 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 01 19:51:59 localhost systemd[1]: Started Journal Service.
Dec 01 19:51:59 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 01 19:51:59 localhost kernel: ACPI: bus type drm_connector registered
Dec 01 19:51:59 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 01 19:51:59 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 01 19:51:59 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 01 19:51:59 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 01 19:51:59 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 01 19:51:59 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 01 19:51:59 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 01 19:51:59 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 01 19:51:59 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 01 19:51:59 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 01 19:51:59 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 01 19:51:59 localhost systemd[1]: Mounting FUSE Control File System...
Dec 01 19:51:59 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 01 19:51:59 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 01 19:51:59 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 01 19:51:59 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 01 19:51:59 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 01 19:51:59 localhost systemd[1]: Starting Create System Users...
Dec 01 19:51:59 localhost systemd-journald[682]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec 01 19:51:59 localhost systemd-journald[682]: Received client request to flush runtime journal.
Dec 01 19:51:59 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 01 19:51:59 localhost systemd[1]: Mounted FUSE Control File System.
Dec 01 19:51:59 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 01 19:51:59 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 01 19:51:59 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 01 19:51:59 localhost systemd[1]: Finished Create System Users.
Dec 01 19:51:59 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 01 19:51:59 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 01 19:51:59 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 01 19:51:59 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 01 19:51:59 localhost systemd[1]: Reached target Local File Systems.
Dec 01 19:51:59 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 01 19:51:59 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 01 19:51:59 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 01 19:51:59 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 01 19:51:59 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 01 19:51:59 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 01 19:51:59 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 01 19:51:59 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Dec 01 19:51:59 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 01 19:51:59 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 01 19:51:59 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 01 19:51:59 localhost systemd[1]: Starting Security Auditing Service...
Dec 01 19:51:59 localhost systemd[1]: Starting RPC Bind...
Dec 01 19:51:59 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 01 19:51:59 localhost auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 01 19:51:59 localhost auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 01 19:51:59 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 01 19:51:59 localhost augenrules[710]: /sbin/augenrules: No change
Dec 01 19:51:59 localhost systemd[1]: Started RPC Bind.
Dec 01 19:51:59 localhost augenrules[725]: No rules
Dec 01 19:51:59 localhost augenrules[725]: enabled 1
Dec 01 19:51:59 localhost augenrules[725]: failure 1
Dec 01 19:51:59 localhost augenrules[725]: pid 705
Dec 01 19:51:59 localhost augenrules[725]: rate_limit 0
Dec 01 19:51:59 localhost augenrules[725]: backlog_limit 8192
Dec 01 19:51:59 localhost augenrules[725]: lost 0
Dec 01 19:51:59 localhost augenrules[725]: backlog 0
Dec 01 19:51:59 localhost augenrules[725]: backlog_wait_time 60000
Dec 01 19:51:59 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 01 19:51:59 localhost augenrules[725]: enabled 1
Dec 01 19:51:59 localhost augenrules[725]: failure 1
Dec 01 19:51:59 localhost augenrules[725]: pid 705
Dec 01 19:51:59 localhost augenrules[725]: rate_limit 0
Dec 01 19:51:59 localhost augenrules[725]: backlog_limit 8192
Dec 01 19:51:59 localhost augenrules[725]: lost 0
Dec 01 19:51:59 localhost augenrules[725]: backlog 0
Dec 01 19:51:59 localhost augenrules[725]: backlog_wait_time 60000
Dec 01 19:51:59 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 01 19:51:59 localhost augenrules[725]: enabled 1
Dec 01 19:51:59 localhost augenrules[725]: failure 1
Dec 01 19:51:59 localhost augenrules[725]: pid 705
Dec 01 19:51:59 localhost augenrules[725]: rate_limit 0
Dec 01 19:51:59 localhost augenrules[725]: backlog_limit 8192
Dec 01 19:51:59 localhost augenrules[725]: lost 0
Dec 01 19:51:59 localhost augenrules[725]: backlog 3
Dec 01 19:51:59 localhost augenrules[725]: backlog_wait_time 60000
Dec 01 19:51:59 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 01 19:51:59 localhost systemd[1]: Started Security Auditing Service.
Dec 01 19:51:59 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 01 19:51:59 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 01 19:52:00 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 01 19:52:00 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 01 19:52:00 localhost systemd[1]: Starting Update is Completed...
Dec 01 19:52:00 localhost systemd[1]: Finished Update is Completed.
Dec 01 19:52:00 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Dec 01 19:52:00 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 01 19:52:00 localhost systemd[1]: Reached target System Initialization.
Dec 01 19:52:00 localhost systemd[1]: Started dnf makecache --timer.
Dec 01 19:52:00 localhost systemd[1]: Started Daily rotation of log files.
Dec 01 19:52:00 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 01 19:52:00 localhost systemd[1]: Reached target Timer Units.
Dec 01 19:52:00 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 01 19:52:00 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 01 19:52:00 localhost systemd[1]: Reached target Socket Units.
Dec 01 19:52:00 localhost systemd-udevd[743]: Network interface NamePolicy= disabled on kernel command line.
Dec 01 19:52:00 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 01 19:52:00 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 01 19:52:00 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 01 19:52:00 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 01 19:52:00 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 01 19:52:00 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 01 19:52:00 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 01 19:52:00 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 01 19:52:00 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 01 19:52:00 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 01 19:52:00 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 01 19:52:00 localhost dbus-broker-lau[772]: Ready
Dec 01 19:52:00 localhost systemd[1]: Reached target Basic System.
Dec 01 19:52:00 localhost systemd[1]: Starting NTP client/server...
Dec 01 19:52:00 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 01 19:52:00 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 01 19:52:00 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 01 19:52:00 localhost systemd[1]: Started irqbalance daemon.
Dec 01 19:52:00 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 01 19:52:00 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 01 19:52:00 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 01 19:52:00 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 01 19:52:00 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 01 19:52:00 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 01 19:52:00 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 01 19:52:00 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 01 19:52:00 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 01 19:52:00 localhost kernel: Console: switching to colour dummy device 80x25
Dec 01 19:52:00 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 01 19:52:00 localhost kernel: [drm] features: -context_init
Dec 01 19:52:00 localhost kernel: [drm] number of scanouts: 1
Dec 01 19:52:00 localhost kernel: [drm] number of cap sets: 0
Dec 01 19:52:00 localhost chronyd[798]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 01 19:52:00 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 01 19:52:00 localhost systemd[1]: Starting User Login Management...
Dec 01 19:52:00 localhost chronyd[798]: Loaded 0 symmetric keys
Dec 01 19:52:00 localhost chronyd[798]: Using right/UTC timezone to obtain leap second data
Dec 01 19:52:00 localhost chronyd[798]: Loaded seccomp filter (level 2)
Dec 01 19:52:00 localhost systemd[1]: Started NTP client/server.
Dec 01 19:52:00 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 01 19:52:00 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 01 19:52:00 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 01 19:52:00 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 01 19:52:00 localhost systemd-logind[796]: New seat seat0.
Dec 01 19:52:00 localhost systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 01 19:52:00 localhost systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 01 19:52:00 localhost systemd[1]: Started User Login Management.
Dec 01 19:52:00 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 01 19:52:00 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 01 19:52:00 localhost kernel: kvm_amd: TSC scaling supported
Dec 01 19:52:00 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 01 19:52:00 localhost kernel: kvm_amd: Nested Paging enabled
Dec 01 19:52:00 localhost kernel: kvm_amd: LBR virtualization supported
Dec 01 19:52:00 localhost iptables.init[787]: iptables: Applying firewall rules: [  OK  ]
Dec 01 19:52:00 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 01 19:52:00 localhost cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 01 Dec 2025 19:52:00 +0000. Up 6.65 seconds.
Dec 01 19:52:01 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 01 19:52:01 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 01 19:52:01 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpuof_15c8.mount: Deactivated successfully.
Dec 01 19:52:01 localhost systemd[1]: Starting Hostname Service...
Dec 01 19:52:01 localhost systemd[1]: Started Hostname Service.
Dec 01 19:52:01 np0005541545.novalocal systemd-hostnamed[856]: Hostname set to <np0005541545.novalocal> (static)
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Reached target Preparation for Network.
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Starting Network Manager...
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.4815] NetworkManager (version 1.54.1-1.el9) is starting... (boot:59640efb-8f9f-4203-9412-dcffb4b15890)
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.4820] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.4909] manager[0x556967aa9080]: monitoring kernel firmware directory '/lib/firmware'.
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.4950] hostname: hostname: using hostnamed
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.4951] hostname: static hostname changed from (none) to "np0005541545.novalocal"
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.4954] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5122] manager[0x556967aa9080]: rfkill: Wi-Fi hardware radio set enabled
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5123] manager[0x556967aa9080]: rfkill: WWAN hardware radio set enabled
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5163] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5163] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5164] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5164] manager: Networking is enabled by state file
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5166] settings: Loaded settings plugin: keyfile (internal)
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5194] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5211] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5222] dhcp: init: Using DHCP client 'internal'
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5224] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5235] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5242] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5248] device (lo): Activation: starting connection 'lo' (0db15d13-256d-400b-ba95-6470a1e83921)
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5255] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5258] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5305] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5313] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5317] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5320] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5324] device (eth0): carrier: link connected
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5330] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5341] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5354] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Started Network Manager.
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5362] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5364] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5368] manager: NetworkManager state is now CONNECTING
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Reached target Network.
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5370] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5381] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5386] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5433] dhcp4 (eth0): state changed new lease, address=38.102.83.214
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5445] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5477] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5555] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5558] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5567] device (lo): Activation: successful, device activated.
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5579] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5581] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5587] manager: NetworkManager state is now CONNECTED_SITE
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5592] device (eth0): Activation: successful, device activated.
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5600] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 01 19:52:01 np0005541545.novalocal NetworkManager[860]: <info>  [1764618721.5605] manager: startup complete
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Reached target NFS client services.
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Reached target Remote File Systems.
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 01 19:52:01 np0005541545.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 01 Dec 2025 19:52:01 +0000. Up 7.61 seconds.
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.214         | 255.255.255.0 | global | fa:16:3e:53:5b:52 |
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fe53:5b52/64 |       .       |  link  | fa:16:3e:53:5b:52 |
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 01 19:52:01 np0005541545.novalocal cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 01 19:52:02 np0005541545.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 01 19:52:02 np0005541545.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Dec 01 19:52:02 np0005541545.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 01 19:52:02 np0005541545.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Dec 01 19:52:02 np0005541545.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Dec 01 19:52:02 np0005541545.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Dec 01 19:52:02 np0005541545.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: Generating public/private rsa key pair.
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: The key fingerprint is:
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: SHA256:dxFeJfHSRmA8a1vMdyNrLrE0Ml+It3jGNPEOJOhFb5M root@np0005541545.novalocal
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: The key's randomart image is:
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: +---[RSA 3072]----+
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |            ..*+o|
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |         . . +o= |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |        o . + .=+|
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |       . o E oo+*|
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |      . S * *.oo+|
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |       . = @ =.  |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |          X %    |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |         . O o   |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |          o .    |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: +----[SHA256]-----+
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: Generating public/private ecdsa key pair.
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: The key fingerprint is:
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: SHA256:DkDBD086icOctHx+e+EigxoirqYoOZ4uFWXX6Ak3ruc root@np0005541545.novalocal
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: The key's randomart image is:
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: +---[ECDSA 256]---+
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |   .o. o         |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |  ..* * .        |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: | = *.& o         |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |  X =.*          |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |   = o. S        |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |  . o oo.        |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |+o . + o..       |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |Xoo o E o        |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |&B   o o         |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: +----[SHA256]-----+
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: Generating public/private ed25519 key pair.
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: The key fingerprint is:
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: SHA256:JgUmuTqVLssg3yHOJ4E2dzg8Wuos84UZ60op0xxBTDI root@np0005541545.novalocal
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: The key's randomart image is:
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: +--[ED25519 256]--+
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |Eoo ..o          |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: | +. .o .         |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |  .  o  .        |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |   .+  .         |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: | .++. . S        |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |+=B&o. o         |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |=O%B=.           |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |*=B.o            |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: |+*oo             |
Dec 01 19:52:03 np0005541545.novalocal cloud-init[923]: +----[SHA256]-----+
Dec 01 19:52:03 np0005541545.novalocal sm-notify[1005]: Version 2.5.4 starting
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Reached target Network is Online.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Starting System Logging Service...
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Starting Permit User Sessions...
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 01 19:52:03 np0005541545.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Dec 01 19:52:03 np0005541545.novalocal sshd[1007]: Server listening on :: port 22.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Finished Permit User Sessions.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Started Command Scheduler.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Started Getty on tty1.
Dec 01 19:52:03 np0005541545.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Dec 01 19:52:03 np0005541545.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 01 19:52:03 np0005541545.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 89% if used.)
Dec 01 19:52:03 np0005541545.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Reached target Login Prompts.
Dec 01 19:52:03 np0005541545.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Started System Logging Service.
Dec 01 19:52:03 np0005541545.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Reached target Multi-User System.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 01 19:52:03 np0005541545.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 19:52:03 np0005541545.novalocal kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Dec 01 19:52:03 np0005541545.novalocal kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Dec 01 19:52:03 np0005541545.novalocal cloud-init[1138]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 01 Dec 2025 19:52:03 +0000. Up 9.21 seconds.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 01 19:52:03 np0005541545.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 01 19:52:03 np0005541545.novalocal sshd-session[1216]: Unable to negotiate with 38.102.83.114 port 33356: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 01 19:52:03 np0005541545.novalocal sshd-session[1234]: Unable to negotiate with 38.102.83.114 port 33376: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 01 19:52:03 np0005541545.novalocal sshd-session[1244]: Unable to negotiate with 38.102.83.114 port 33382: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 01 19:52:03 np0005541545.novalocal sshd-session[1201]: Connection closed by 38.102.83.114 port 33350 [preauth]
Dec 01 19:52:03 np0005541545.novalocal sshd-session[1251]: Connection reset by 38.102.83.114 port 33396 [preauth]
Dec 01 19:52:03 np0005541545.novalocal sshd-session[1222]: Connection closed by 38.102.83.114 port 33362 [preauth]
Dec 01 19:52:03 np0005541545.novalocal sshd-session[1277]: Unable to negotiate with 38.102.83.114 port 33414: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 01 19:52:03 np0005541545.novalocal dracut[1282]: dracut-057-102.git20250818.el9
Dec 01 19:52:03 np0005541545.novalocal sshd-session[1283]: Unable to negotiate with 38.102.83.114 port 33426: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 01 19:52:03 np0005541545.novalocal sshd-session[1269]: Connection closed by 38.102.83.114 port 33404 [preauth]
Dec 01 19:52:04 np0005541545.novalocal cloud-init[1302]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 01 Dec 2025 19:52:03 +0000. Up 9.69 seconds.
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Dec 01 19:52:04 np0005541545.novalocal cloud-init[1331]: #############################################################
Dec 01 19:52:04 np0005541545.novalocal cloud-init[1334]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 01 19:52:04 np0005541545.novalocal cloud-init[1340]: 256 SHA256:DkDBD086icOctHx+e+EigxoirqYoOZ4uFWXX6Ak3ruc root@np0005541545.novalocal (ECDSA)
Dec 01 19:52:04 np0005541545.novalocal cloud-init[1349]: 256 SHA256:JgUmuTqVLssg3yHOJ4E2dzg8Wuos84UZ60op0xxBTDI root@np0005541545.novalocal (ED25519)
Dec 01 19:52:04 np0005541545.novalocal cloud-init[1356]: 3072 SHA256:dxFeJfHSRmA8a1vMdyNrLrE0Ml+It3jGNPEOJOhFb5M root@np0005541545.novalocal (RSA)
Dec 01 19:52:04 np0005541545.novalocal cloud-init[1357]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 01 19:52:04 np0005541545.novalocal cloud-init[1361]: #############################################################
Dec 01 19:52:04 np0005541545.novalocal cloud-init[1302]: Cloud-init v. 24.4-7.el9 finished at Mon, 01 Dec 2025 19:52:04 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.88 seconds
Dec 01 19:52:04 np0005541545.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 01 19:52:04 np0005541545.novalocal systemd[1]: Reached target Cloud-init target.
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 01 19:52:04 np0005541545.novalocal dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: memstrack is not available
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: memstrack is not available
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 01 19:52:05 np0005541545.novalocal dracut[1286]: *** Including module: systemd ***
Dec 01 19:52:06 np0005541545.novalocal dracut[1286]: *** Including module: fips ***
Dec 01 19:52:06 np0005541545.novalocal chronyd[798]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Dec 01 19:52:06 np0005541545.novalocal chronyd[798]: System clock TAI offset set to 37 seconds
Dec 01 19:52:06 np0005541545.novalocal dracut[1286]: *** Including module: systemd-initrd ***
Dec 01 19:52:06 np0005541545.novalocal dracut[1286]: *** Including module: i18n ***
Dec 01 19:52:06 np0005541545.novalocal dracut[1286]: *** Including module: drm ***
Dec 01 19:52:07 np0005541545.novalocal dracut[1286]: *** Including module: prefixdevname ***
Dec 01 19:52:07 np0005541545.novalocal dracut[1286]: *** Including module: kernel-modules ***
Dec 01 19:52:07 np0005541545.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 01 19:52:07 np0005541545.novalocal dracut[1286]: *** Including module: kernel-modules-extra ***
Dec 01 19:52:07 np0005541545.novalocal dracut[1286]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 01 19:52:07 np0005541545.novalocal dracut[1286]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 01 19:52:07 np0005541545.novalocal dracut[1286]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 01 19:52:07 np0005541545.novalocal dracut[1286]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 01 19:52:07 np0005541545.novalocal dracut[1286]: *** Including module: qemu ***
Dec 01 19:52:07 np0005541545.novalocal dracut[1286]: *** Including module: fstab-sys ***
Dec 01 19:52:07 np0005541545.novalocal dracut[1286]: *** Including module: rootfs-block ***
Dec 01 19:52:08 np0005541545.novalocal dracut[1286]: *** Including module: terminfo ***
Dec 01 19:52:08 np0005541545.novalocal dracut[1286]: *** Including module: udev-rules ***
Dec 01 19:52:08 np0005541545.novalocal dracut[1286]: Skipping udev rule: 91-permissions.rules
Dec 01 19:52:08 np0005541545.novalocal dracut[1286]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 01 19:52:08 np0005541545.novalocal dracut[1286]: *** Including module: virtiofs ***
Dec 01 19:52:08 np0005541545.novalocal dracut[1286]: *** Including module: dracut-systemd ***
Dec 01 19:52:08 np0005541545.novalocal dracut[1286]: *** Including module: usrmount ***
Dec 01 19:52:08 np0005541545.novalocal dracut[1286]: *** Including module: base ***
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]: *** Including module: fs-lib ***
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]: *** Including module: kdumpbase ***
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:   microcode_ctl module: mangling fw_dir
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: configuration "intel" is ignored
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 01 19:52:09 np0005541545.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 01 19:52:10 np0005541545.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 01 19:52:10 np0005541545.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 01 19:52:10 np0005541545.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 01 19:52:10 np0005541545.novalocal dracut[1286]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 01 19:52:10 np0005541545.novalocal dracut[1286]: *** Including module: openssl ***
Dec 01 19:52:10 np0005541545.novalocal dracut[1286]: *** Including module: shutdown ***
Dec 01 19:52:10 np0005541545.novalocal dracut[1286]: *** Including module: squash ***
Dec 01 19:52:10 np0005541545.novalocal dracut[1286]: *** Including modules done ***
Dec 01 19:52:10 np0005541545.novalocal dracut[1286]: *** Installing kernel module dependencies ***
Dec 01 19:52:10 np0005541545.novalocal dracut[1286]: *** Installing kernel module dependencies done ***
Dec 01 19:52:11 np0005541545.novalocal dracut[1286]: *** Resolving executable dependencies ***
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: IRQ 25 affinity is now unmanaged
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: IRQ 31 affinity is now unmanaged
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: IRQ 28 affinity is now unmanaged
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: IRQ 32 affinity is now unmanaged
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: IRQ 30 affinity is now unmanaged
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 01 19:52:11 np0005541545.novalocal irqbalance[789]: IRQ 29 affinity is now unmanaged
Dec 01 19:52:11 np0005541545.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 01 19:52:12 np0005541545.novalocal dracut[1286]: *** Resolving executable dependencies done ***
Dec 01 19:52:12 np0005541545.novalocal dracut[1286]: *** Generating early-microcode cpio image ***
Dec 01 19:52:12 np0005541545.novalocal dracut[1286]: *** Store current command line parameters ***
Dec 01 19:52:12 np0005541545.novalocal dracut[1286]: Stored kernel commandline:
Dec 01 19:52:12 np0005541545.novalocal dracut[1286]: No dracut internal kernel commandline stored in the initramfs
Dec 01 19:52:12 np0005541545.novalocal dracut[1286]: *** Install squash loader ***
Dec 01 19:52:13 np0005541545.novalocal dracut[1286]: *** Squashing the files inside the initramfs ***
Dec 01 19:52:14 np0005541545.novalocal dracut[1286]: *** Squashing the files inside the initramfs done ***
Dec 01 19:52:14 np0005541545.novalocal dracut[1286]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Dec 01 19:52:14 np0005541545.novalocal dracut[1286]: *** Hardlinking files ***
Dec 01 19:52:14 np0005541545.novalocal dracut[1286]: Mode:           real
Dec 01 19:52:14 np0005541545.novalocal dracut[1286]: Files:          50
Dec 01 19:52:14 np0005541545.novalocal dracut[1286]: Linked:         0 files
Dec 01 19:52:14 np0005541545.novalocal dracut[1286]: Compared:       0 xattrs
Dec 01 19:52:14 np0005541545.novalocal dracut[1286]: Compared:       0 files
Dec 01 19:52:14 np0005541545.novalocal dracut[1286]: Saved:          0 B
Dec 01 19:52:14 np0005541545.novalocal dracut[1286]: Duration:       0.001017 seconds
Dec 01 19:52:14 np0005541545.novalocal dracut[1286]: *** Hardlinking files done ***
Dec 01 19:52:15 np0005541545.novalocal dracut[1286]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Dec 01 19:52:15 np0005541545.novalocal kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Dec 01 19:52:15 np0005541545.novalocal kdumpctl[1015]: kdump: Starting kdump: [OK]
Dec 01 19:52:15 np0005541545.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 01 19:52:15 np0005541545.novalocal systemd[1]: Startup finished in 1.800s (kernel) + 2.792s (initrd) + 17.080s (userspace) = 21.673s.
Dec 01 19:52:17 np0005541545.novalocal sshd-session[4295]: Accepted publickey for zuul from 38.102.83.114 port 40192 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 01 19:52:17 np0005541545.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 01 19:52:17 np0005541545.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 01 19:52:17 np0005541545.novalocal systemd-logind[796]: New session 1 of user zuul.
Dec 01 19:52:17 np0005541545.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 01 19:52:17 np0005541545.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Queued start job for default target Main User Target.
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Created slice User Application Slice.
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Reached target Paths.
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Reached target Timers.
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Starting D-Bus User Message Bus Socket...
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Starting Create User's Volatile Files and Directories...
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Finished Create User's Volatile Files and Directories.
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Listening on D-Bus User Message Bus Socket.
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Reached target Sockets.
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Reached target Basic System.
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Reached target Main User Target.
Dec 01 19:52:17 np0005541545.novalocal systemd[4299]: Startup finished in 143ms.
Dec 01 19:52:17 np0005541545.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 01 19:52:17 np0005541545.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 01 19:52:17 np0005541545.novalocal sshd-session[4295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 19:52:18 np0005541545.novalocal python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 19:52:20 np0005541545.novalocal python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 19:52:26 np0005541545.novalocal python3[4467]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 19:52:27 np0005541545.novalocal python3[4507]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 01 19:52:29 np0005541545.novalocal python3[4533]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDboG9/R81htqCLzIqBNzmKwYWrhUVSq9bCWgzJrwVlVDNXHsbkCwlikbKIk3ot7DaLkXWnGzxazTL4GqoiYDRVHGN2WyGpWrfJPMdVSxqmjL7tTqr6IGSa/lCGsOfQjj7u59XL3MOc6T/R2tb9gjxhMBnt2u0r1qW0TdgxnTbo5COtGoL5JFJIEiXJXh10e6iO63IcME1LegCcXV8vxWdL6U0hOh8A5c+gkrxARl2PbKC7fNS+birgnsAnKqF7xedmHHLjqRB2OUo3eURi+bwmj6WwqAKLC+o0uJRmLXV4MR/dvR2Hor+Z3Ims45vfIlvXeFyUtfd/dIzsyFbWLgyvfPEqZWopgi4fgmV6Q5IJ4X5rY2HZmsiGN62irrInr1fMWYXiETxaDxzXeipcMODLlCjTi9J/DQjoaygUocLe86QHWI3iJ5rOgzWlvT0bf/7NKJhHG5EQZRVBGx9Z7zDJHCJrEY01okyOwS/+xpO1llUB7osccaxyzCye6uPZnJ0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:29 np0005541545.novalocal python3[4557]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:30 np0005541545.novalocal python3[4656]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 19:52:30 np0005541545.novalocal python3[4727]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764618750.0988743-207-171075804799435/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=126dba2c75b5403796579a48eed7961c_id_rsa follow=False checksum=e0d64e7da5ca0900f899a50c969bef71d48aa59a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:31 np0005541545.novalocal python3[4850]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 19:52:31 np0005541545.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 01 19:52:31 np0005541545.novalocal python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764618751.0995815-240-212188422788749/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=126dba2c75b5403796579a48eed7961c_id_rsa.pub follow=False checksum=ad23ce73caacf1404940f5e2b04d9e5b1a0b6957 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:33 np0005541545.novalocal python3[4971]: ansible-ping Invoked with data=pong
Dec 01 19:52:34 np0005541545.novalocal python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 19:52:36 np0005541545.novalocal python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 01 19:52:37 np0005541545.novalocal python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:38 np0005541545.novalocal python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:38 np0005541545.novalocal python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:38 np0005541545.novalocal python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:38 np0005541545.novalocal python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:39 np0005541545.novalocal python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:40 np0005541545.novalocal sudo[5229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umxavtiaqlthmuzcukdgtodlwvbifnxp ; /usr/bin/python3'
Dec 01 19:52:40 np0005541545.novalocal sudo[5229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:52:40 np0005541545.novalocal python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:40 np0005541545.novalocal sudo[5229]: pam_unix(sudo:session): session closed for user root
Dec 01 19:52:41 np0005541545.novalocal sudo[5307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxucjxfezglqgazaepiderpdobuasray ; /usr/bin/python3'
Dec 01 19:52:41 np0005541545.novalocal sudo[5307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:52:41 np0005541545.novalocal python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 19:52:41 np0005541545.novalocal sudo[5307]: pam_unix(sudo:session): session closed for user root
Dec 01 19:52:41 np0005541545.novalocal sudo[5380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbsdqzbwghunwtuokqoytvumcxlsnxuu ; /usr/bin/python3'
Dec 01 19:52:41 np0005541545.novalocal sudo[5380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:52:41 np0005541545.novalocal python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764618760.9045432-21-258531696765025/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:41 np0005541545.novalocal sudo[5380]: pam_unix(sudo:session): session closed for user root
Dec 01 19:52:42 np0005541545.novalocal python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:42 np0005541545.novalocal python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:43 np0005541545.novalocal python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:43 np0005541545.novalocal python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:43 np0005541545.novalocal python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:43 np0005541545.novalocal python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:44 np0005541545.novalocal python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:44 np0005541545.novalocal python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:44 np0005541545.novalocal python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:45 np0005541545.novalocal python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:45 np0005541545.novalocal python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:45 np0005541545.novalocal python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:45 np0005541545.novalocal python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:46 np0005541545.novalocal python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:46 np0005541545.novalocal python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:46 np0005541545.novalocal python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:47 np0005541545.novalocal python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:47 np0005541545.novalocal python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:47 np0005541545.novalocal python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:47 np0005541545.novalocal python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:48 np0005541545.novalocal python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:48 np0005541545.novalocal python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:48 np0005541545.novalocal python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:48 np0005541545.novalocal python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:49 np0005541545.novalocal python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:49 np0005541545.novalocal python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 19:52:51 np0005541545.novalocal sudo[6054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvsvmekwpmxsxldihfeeyaaqwbtdeyrr ; /usr/bin/python3'
Dec 01 19:52:51 np0005541545.novalocal sudo[6054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:52:52 np0005541545.novalocal python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 01 19:52:52 np0005541545.novalocal systemd[1]: Starting Time & Date Service...
Dec 01 19:52:52 np0005541545.novalocal systemd[1]: Started Time & Date Service.
Dec 01 19:52:52 np0005541545.novalocal systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Dec 01 19:52:52 np0005541545.novalocal sudo[6054]: pam_unix(sudo:session): session closed for user root
Dec 01 19:52:52 np0005541545.novalocal sudo[6085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrvkgohktqyasyxwgzlxyxzltwvorcou ; /usr/bin/python3'
Dec 01 19:52:52 np0005541545.novalocal sudo[6085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:52:52 np0005541545.novalocal python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:52 np0005541545.novalocal sudo[6085]: pam_unix(sudo:session): session closed for user root
Dec 01 19:52:53 np0005541545.novalocal python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 19:52:53 np0005541545.novalocal python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764618772.8538847-153-792186012145/source _original_basename=tmp8hmg8s2t follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:53 np0005541545.novalocal python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 19:52:54 np0005541545.novalocal python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764618773.6958344-183-48284552053170/source _original_basename=tmpztuper6e follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:55 np0005541545.novalocal sudo[6505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njlohtsaviyiuolsgllcvgwicczeqibp ; /usr/bin/python3'
Dec 01 19:52:55 np0005541545.novalocal sudo[6505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:52:55 np0005541545.novalocal python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 19:52:55 np0005541545.novalocal sudo[6505]: pam_unix(sudo:session): session closed for user root
Dec 01 19:52:55 np0005541545.novalocal sudo[6578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acvvxsiftmkjzonbhlnrybbgrqsqrbbb ; /usr/bin/python3'
Dec 01 19:52:55 np0005541545.novalocal sudo[6578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:52:55 np0005541545.novalocal python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764618774.8353887-231-215590265722502/source _original_basename=tmphnnvc1oj follow=False checksum=6ccb10e811009b8a3fb6665900f19085a6ead209 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:55 np0005541545.novalocal sudo[6578]: pam_unix(sudo:session): session closed for user root
Dec 01 19:52:56 np0005541545.novalocal python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 19:52:56 np0005541545.novalocal python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 19:52:56 np0005541545.novalocal sudo[6732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtstevlhlqlisigtsatxczbljmgiabhu ; /usr/bin/python3'
Dec 01 19:52:56 np0005541545.novalocal sudo[6732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:52:56 np0005541545.novalocal python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 19:52:56 np0005541545.novalocal sudo[6732]: pam_unix(sudo:session): session closed for user root
Dec 01 19:52:57 np0005541545.novalocal sudo[6805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkovmpwuvvyedgkaqjptopvuuqxtrvpz ; /usr/bin/python3'
Dec 01 19:52:57 np0005541545.novalocal sudo[6805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:52:57 np0005541545.novalocal python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764618776.5721803-273-150110588237666/source _original_basename=tmpy2b3iirj follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:52:57 np0005541545.novalocal sudo[6805]: pam_unix(sudo:session): session closed for user root
Dec 01 19:52:57 np0005541545.novalocal sudo[6856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rufohynobcirztjngttckcoiwimxqbvx ; /usr/bin/python3'
Dec 01 19:52:57 np0005541545.novalocal sudo[6856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:52:57 np0005541545.novalocal python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-ee73-3332-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 19:52:57 np0005541545.novalocal sudo[6856]: pam_unix(sudo:session): session closed for user root
Dec 01 19:52:58 np0005541545.novalocal python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-ee73-3332-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 01 19:52:59 np0005541545.novalocal python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:53:01 np0005541545.novalocal irqbalance[789]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 01 19:53:01 np0005541545.novalocal irqbalance[789]: IRQ 26 affinity is now unmanaged
Dec 01 19:53:17 np0005541545.novalocal sudo[6938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsumgabpxzhvnysdljiaeshmmpxkebvq ; /usr/bin/python3'
Dec 01 19:53:17 np0005541545.novalocal sudo[6938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:53:17 np0005541545.novalocal python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:53:17 np0005541545.novalocal sudo[6938]: pam_unix(sudo:session): session closed for user root
Dec 01 19:53:22 np0005541545.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 01 19:53:53 np0005541545.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 01 19:53:53 np0005541545.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 01 19:53:53 np0005541545.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 01 19:53:53 np0005541545.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 01 19:53:53 np0005541545.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 01 19:53:53 np0005541545.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 01 19:53:53 np0005541545.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 01 19:53:53 np0005541545.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 01 19:53:53 np0005541545.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 01 19:53:53 np0005541545.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 01 19:53:53 np0005541545.novalocal NetworkManager[860]: <info>  [1764618833.3371] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 01 19:53:53 np0005541545.novalocal systemd-udevd[6944]: Network interface NamePolicy= disabled on kernel command line.
Dec 01 19:53:53 np0005541545.novalocal NetworkManager[860]: <info>  [1764618833.3539] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 19:53:53 np0005541545.novalocal NetworkManager[860]: <info>  [1764618833.3581] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 01 19:53:53 np0005541545.novalocal NetworkManager[860]: <info>  [1764618833.3586] device (eth1): carrier: link connected
Dec 01 19:53:53 np0005541545.novalocal NetworkManager[860]: <info>  [1764618833.3589] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 01 19:53:53 np0005541545.novalocal NetworkManager[860]: <info>  [1764618833.3598] policy: auto-activating connection 'Wired connection 1' (7e823da8-b6f6-3eaa-b777-0c51f4576aa2)
Dec 01 19:53:53 np0005541545.novalocal NetworkManager[860]: <info>  [1764618833.3603] device (eth1): Activation: starting connection 'Wired connection 1' (7e823da8-b6f6-3eaa-b777-0c51f4576aa2)
Dec 01 19:53:53 np0005541545.novalocal NetworkManager[860]: <info>  [1764618833.3604] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 19:53:53 np0005541545.novalocal NetworkManager[860]: <info>  [1764618833.3607] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 19:53:53 np0005541545.novalocal NetworkManager[860]: <info>  [1764618833.3611] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 19:53:53 np0005541545.novalocal NetworkManager[860]: <info>  [1764618833.3616] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 01 19:53:54 np0005541545.novalocal python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-9378-078b-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 19:53:58 np0005541545.novalocal sshd-session[6973]: Accepted publickey for zuul from 93.44.176.199 port 41920 ssh2: ED25519 SHA256:z36VxXeoDAIqjKJGq1nfXwPuvJNXq/XTSKoTDWY5Afk
Dec 01 19:53:58 np0005541545.novalocal systemd-logind[796]: New session 3 of user zuul.
Dec 01 19:53:58 np0005541545.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 01 19:53:58 np0005541545.novalocal sshd-session[6973]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 19:53:58 np0005541545.novalocal systemd[1]: Starting Hostname Service...
Dec 01 19:53:59 np0005541545.novalocal systemd[1]: Started Hostname Service.
Dec 01 19:54:04 np0005541545.novalocal sudo[7082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxqgvqfzbllyzhyuapsqvwzwnmefamui ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 01 19:54:04 np0005541545.novalocal sudo[7082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:54:04 np0005541545.novalocal sudo[7085]:     zuul : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/bash
Dec 01 19:54:04 np0005541545.novalocal sudo[7085]: pam_unix(sudo-i:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:54:04 np0005541545.novalocal python3[7084]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 19:54:04 np0005541545.novalocal sudo[7082]: pam_unix(sudo:session): session closed for user root
Dec 01 19:54:04 np0005541545.novalocal sudo[7185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtdabblsdafltcyadhddyofgmnkgdmds ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 01 19:54:04 np0005541545.novalocal sudo[7185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:54:04 np0005541545.novalocal python3[7187]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764618844.0213878-102-60969436636604/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=764ff39fcc9d09d996bc8d30b314acc7bdce497b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:54:04 np0005541545.novalocal sudo[7185]: pam_unix(sudo:session): session closed for user root
Dec 01 19:54:05 np0005541545.novalocal sudo[7235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjtnswbunnvkklavtsqpenznomgzuynw ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 01 19:54:05 np0005541545.novalocal sudo[7235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:54:05 np0005541545.novalocal python3[7237]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: Stopping Network Manager...
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[860]: <info>  [1764618845.6409] caught SIGTERM, shutting down normally.
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[860]: <info>  [1764618845.6426] dhcp4 (eth0): canceled DHCP transaction
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[860]: <info>  [1764618845.6426] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[860]: <info>  [1764618845.6426] dhcp4 (eth0): state changed no lease
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[860]: <info>  [1764618845.6429] manager: NetworkManager state is now CONNECTING
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[860]: <info>  [1764618845.6552] dhcp4 (eth1): canceled DHCP transaction
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[860]: <info>  [1764618845.6552] dhcp4 (eth1): state changed no lease
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[860]: <info>  [1764618845.6612] exiting (success)
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: Stopped Network Manager.
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: NetworkManager.service: Consumed 1.139s CPU time, 10.0M memory peak.
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: Starting Network Manager...
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7412] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:59640efb-8f9f-4203-9412-dcffb4b15890)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7413] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7475] manager[0x5626b6632070]: monitoring kernel firmware directory '/lib/firmware'.
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7520] hostname: hostname: using hostnamed
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7520] hostname: static hostname changed from (none) to "np0005541545.novalocal"
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7524] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7528] manager[0x5626b6632070]: rfkill: Wi-Fi hardware radio set enabled
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7528] manager[0x5626b6632070]: rfkill: WWAN hardware radio set enabled
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7556] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7556] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7556] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7557] manager: Networking is enabled by state file
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7559] settings: Loaded settings plugin: keyfile (internal)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7564] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7593] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7605] dhcp: init: Using DHCP client 'internal'
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7609] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7616] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7624] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7635] device (lo): Activation: starting connection 'lo' (0db15d13-256d-400b-ba95-6470a1e83921)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7643] device (eth0): carrier: link connected
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7649] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7657] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7658] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7666] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7674] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7682] device (eth1): carrier: link connected
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7687] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7692] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (7e823da8-b6f6-3eaa-b777-0c51f4576aa2) (indicated)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7693] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7699] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7706] device (eth1): Activation: starting connection 'Wired connection 1' (7e823da8-b6f6-3eaa-b777-0c51f4576aa2)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7714] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: Started Network Manager.
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7719] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7722] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7725] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7728] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7731] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7735] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7738] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7742] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7773] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7778] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7794] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7799] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7830] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7839] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7854] device (lo): Activation: successful, device activated.
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7870] dhcp4 (eth0): state changed new lease, address=38.102.83.214
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7886] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 01 19:54:05 np0005541545.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.7974] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal sudo[7235]: pam_unix(sudo:session): session closed for user root
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.8035] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.8040] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.8048] manager: NetworkManager state is now CONNECTED_SITE
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.8055] device (eth0): Activation: successful, device activated.
Dec 01 19:54:05 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618845.8066] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 01 19:54:06 np0005541545.novalocal python3[7321]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-9378-078b-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 19:54:15 np0005541545.novalocal sudo[7085]: pam_unix(sudo-i:session): session closed for user root
Dec 01 19:54:15 np0005541545.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 01 19:54:23 np0005541545.novalocal systemd[4299]: Starting Mark boot as successful...
Dec 01 19:54:23 np0005541545.novalocal systemd[4299]: Finished Mark boot as successful.
Dec 01 19:54:35 np0005541545.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.2777] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 01 19:54:51 np0005541545.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 01 19:54:51 np0005541545.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3112] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3116] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3126] device (eth1): Activation: successful, device activated.
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3135] manager: startup complete
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3137] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <warn>  [1764618891.3147] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3155] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 01 19:54:51 np0005541545.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3293] dhcp4 (eth1): canceled DHCP transaction
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3294] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3294] dhcp4 (eth1): state changed no lease
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3312] policy: auto-activating connection 'ci-private-network' (f93e717b-0f4e-5511-a89b-ffe6fe3d8145)
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3317] device (eth1): Activation: starting connection 'ci-private-network' (f93e717b-0f4e-5511-a89b-ffe6fe3d8145)
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3318] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3321] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3329] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.3339] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.4205] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.4207] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 19:54:51 np0005541545.novalocal NetworkManager[7247]: <info>  [1764618891.4216] device (eth1): Activation: successful, device activated.
Dec 01 19:55:01 np0005541545.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 01 19:55:03 np0005541545.novalocal sudo[7428]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofvgonmdsnzaozbqprcdlcuwgmmykjep ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 01 19:55:03 np0005541545.novalocal sudo[7428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:55:04 np0005541545.novalocal python3[7430]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 19:55:04 np0005541545.novalocal sudo[7428]: pam_unix(sudo:session): session closed for user root
Dec 01 19:55:04 np0005541545.novalocal sudo[7501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojrbxcbqnukfaaqqiuatggevxucedgko ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 01 19:55:04 np0005541545.novalocal sudo[7501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 19:55:04 np0005541545.novalocal python3[7503]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764618903.7959518-267-26429835863196/source _original_basename=tmpb9qy7921 follow=False checksum=e6109efb5a272ff5f7a2c19c214b4ea3cf68581f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 19:55:04 np0005541545.novalocal sudo[7501]: pam_unix(sudo:session): session closed for user root
Dec 01 19:55:11 np0005541545.novalocal irqbalance[789]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 01 19:55:11 np0005541545.novalocal irqbalance[789]: IRQ 27 affinity is now unmanaged
Dec 01 19:56:04 np0005541545.novalocal sshd-session[4308]: Received disconnect from 38.102.83.114 port 40192:11: disconnected by user
Dec 01 19:56:04 np0005541545.novalocal sshd-session[4308]: Disconnected from user zuul 38.102.83.114 port 40192
Dec 01 19:56:04 np0005541545.novalocal sshd-session[4295]: pam_unix(sshd:session): session closed for user zuul
Dec 01 19:56:04 np0005541545.novalocal systemd-logind[796]: Session 1 logged out. Waiting for processes to exit.
Dec 01 19:57:23 np0005541545.novalocal systemd[4299]: Created slice User Background Tasks Slice.
Dec 01 19:57:23 np0005541545.novalocal systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Dec 01 19:57:23 np0005541545.novalocal systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Dec 01 20:01:01 np0005541545.novalocal CROND[7533]: (root) CMD (run-parts /etc/cron.hourly)
Dec 01 20:01:01 np0005541545.novalocal run-parts[7536]: (/etc/cron.hourly) starting 0anacron
Dec 01 20:01:01 np0005541545.novalocal anacron[7544]: Anacron started on 2025-12-01
Dec 01 20:01:01 np0005541545.novalocal anacron[7544]: Will run job `cron.daily' in 5 min.
Dec 01 20:01:01 np0005541545.novalocal anacron[7544]: Will run job `cron.weekly' in 25 min.
Dec 01 20:01:01 np0005541545.novalocal anacron[7544]: Will run job `cron.monthly' in 45 min.
Dec 01 20:01:01 np0005541545.novalocal anacron[7544]: Jobs will be executed sequentially
Dec 01 20:01:01 np0005541545.novalocal run-parts[7546]: (/etc/cron.hourly) finished 0anacron
Dec 01 20:01:01 np0005541545.novalocal CROND[7532]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 01 20:02:07 np0005541545.novalocal sshd-session[7549]: Accepted publickey for zuul from 38.102.83.114 port 40204 ssh2: RSA SHA256:oog7MteKjkTJ4LxwhsVGQb4CwQfo2OF07ZrpoP0w1bM
Dec 01 20:02:07 np0005541545.novalocal systemd-logind[796]: New session 4 of user zuul.
Dec 01 20:02:07 np0005541545.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 01 20:02:07 np0005541545.novalocal sshd-session[7549]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:02:07 np0005541545.novalocal sudo[7576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clnsuldcvxdaixnxvrvuqiyubjjnrjoj ; /usr/bin/python3'
Dec 01 20:02:07 np0005541545.novalocal sudo[7576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:07 np0005541545.novalocal python3[7578]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-3697-de24-000000001cd4-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:02:07 np0005541545.novalocal sudo[7576]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:08 np0005541545.novalocal sudo[7605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgsqumdiqnlagpcbsndjbcpasqziwpjp ; /usr/bin/python3'
Dec 01 20:02:08 np0005541545.novalocal sudo[7605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:08 np0005541545.novalocal python3[7607]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:02:08 np0005541545.novalocal sudo[7605]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:08 np0005541545.novalocal sudo[7631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvyobamakhmpphkpvcyfyprtsmjcgybt ; /usr/bin/python3'
Dec 01 20:02:08 np0005541545.novalocal sudo[7631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:08 np0005541545.novalocal python3[7633]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:02:08 np0005541545.novalocal sudo[7631]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:08 np0005541545.novalocal sudo[7657]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmbzvfztnvztolwgwsyngtycusmypesh ; /usr/bin/python3'
Dec 01 20:02:08 np0005541545.novalocal sudo[7657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:08 np0005541545.novalocal python3[7659]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:02:08 np0005541545.novalocal sudo[7657]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:08 np0005541545.novalocal sudo[7683]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmutmrfinpkgzllwfqgehlkuctlshnmo ; /usr/bin/python3'
Dec 01 20:02:08 np0005541545.novalocal sudo[7683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:09 np0005541545.novalocal python3[7685]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:02:09 np0005541545.novalocal sudo[7683]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:09 np0005541545.novalocal sudo[7709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkuoizcnslojsjrioeuwfoypjuuyjkyl ; /usr/bin/python3'
Dec 01 20:02:09 np0005541545.novalocal sudo[7709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:09 np0005541545.novalocal python3[7711]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:02:09 np0005541545.novalocal sudo[7709]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:09 np0005541545.novalocal sudo[7787]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtndfwxkxjjafsggmrdmvadvrckgthvn ; /usr/bin/python3'
Dec 01 20:02:09 np0005541545.novalocal sudo[7787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:10 np0005541545.novalocal python3[7789]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:02:10 np0005541545.novalocal sudo[7787]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:10 np0005541545.novalocal sudo[7860]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txnfwxhhqinkjigtjuaagqkhrsxlpkwa ; /usr/bin/python3'
Dec 01 20:02:10 np0005541545.novalocal sudo[7860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:10 np0005541545.novalocal python3[7862]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764619329.797455-482-226135954686011/source _original_basename=tmpx22_uid4 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:02:10 np0005541545.novalocal sudo[7860]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:11 np0005541545.novalocal sudo[7910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxkpwiytgenmheveliigyxgxlgnzuqnx ; /usr/bin/python3'
Dec 01 20:02:11 np0005541545.novalocal sudo[7910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:11 np0005541545.novalocal python3[7912]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 01 20:02:11 np0005541545.novalocal systemd[1]: Reloading.
Dec 01 20:02:11 np0005541545.novalocal systemd-rc-local-generator[7935]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:02:11 np0005541545.novalocal sudo[7910]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:12 np0005541545.novalocal sudo[7966]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yabzffievgzawkgiqnbhqaxvjokxzbja ; /usr/bin/python3'
Dec 01 20:02:12 np0005541545.novalocal sudo[7966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:13 np0005541545.novalocal python3[7968]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 01 20:02:13 np0005541545.novalocal sudo[7966]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:13 np0005541545.novalocal sudo[7992]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nloszvqshqkeowlbnhcgsbvklczwduwl ; /usr/bin/python3'
Dec 01 20:02:13 np0005541545.novalocal sudo[7992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:13 np0005541545.novalocal python3[7994]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:02:13 np0005541545.novalocal sudo[7992]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:13 np0005541545.novalocal sudo[8020]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgntblyvoottvkzrzvwduprwmbrijqhh ; /usr/bin/python3'
Dec 01 20:02:13 np0005541545.novalocal sudo[8020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:13 np0005541545.novalocal python3[8022]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:02:13 np0005541545.novalocal sudo[8020]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:13 np0005541545.novalocal sudo[8048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvjiwgshyzfikimsuzibetppogjuoolj ; /usr/bin/python3'
Dec 01 20:02:13 np0005541545.novalocal sudo[8048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:13 np0005541545.novalocal python3[8050]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:02:13 np0005541545.novalocal sudo[8048]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:14 np0005541545.novalocal sudo[8076]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlnlgnfcxgzwlgyqfuzoybdcdqdhektp ; /usr/bin/python3'
Dec 01 20:02:14 np0005541545.novalocal sudo[8076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:14 np0005541545.novalocal python3[8078]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:02:14 np0005541545.novalocal sudo[8076]: pam_unix(sudo:session): session closed for user root
Dec 01 20:02:14 np0005541545.novalocal python3[8105]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-3697-de24-000000001cdb-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:02:15 np0005541545.novalocal python3[8135]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 01 20:02:17 np0005541545.novalocal sshd-session[7552]: Connection closed by 38.102.83.114 port 40204
Dec 01 20:02:17 np0005541545.novalocal sshd-session[7549]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:02:17 np0005541545.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 01 20:02:17 np0005541545.novalocal systemd[1]: session-4.scope: Consumed 4.080s CPU time.
Dec 01 20:02:17 np0005541545.novalocal systemd-logind[796]: Session 4 logged out. Waiting for processes to exit.
Dec 01 20:02:17 np0005541545.novalocal systemd-logind[796]: Removed session 4.
Dec 01 20:02:18 np0005541545.novalocal sshd-session[8141]: Accepted publickey for zuul from 38.102.83.114 port 34220 ssh2: RSA SHA256:oog7MteKjkTJ4LxwhsVGQb4CwQfo2OF07ZrpoP0w1bM
Dec 01 20:02:18 np0005541545.novalocal systemd-logind[796]: New session 5 of user zuul.
Dec 01 20:02:18 np0005541545.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 01 20:02:18 np0005541545.novalocal sshd-session[8141]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:02:18 np0005541545.novalocal sudo[8168]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irmudzjdvbbnrqwiqrkijwudgvdvabws ; /usr/bin/python3'
Dec 01 20:02:18 np0005541545.novalocal sudo[8168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:02:19 np0005541545.novalocal python3[8170]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 01 20:03:02 np0005541545.novalocal sshd-session[8287]: Received disconnect from 193.46.255.99 port 60636:11:  [preauth]
Dec 01 20:03:02 np0005541545.novalocal sshd-session[8287]: Disconnected from authenticating user root 193.46.255.99 port 60636 [preauth]
Dec 01 20:03:12 np0005541545.novalocal sshd-session[8289]: Connection closed by authenticating user root 209.38.37.183 port 36574 [preauth]
Dec 01 20:03:56 np0005541545.novalocal sshd-session[8348]: Connection closed by authenticating user root 103.39.209.183 port 35756 [preauth]
Dec 01 20:03:58 np0005541545.novalocal sshd-session[8367]: Connection closed by authenticating user root 103.39.209.183 port 35758 [preauth]
Dec 01 20:04:00 np0005541545.novalocal sshd-session[8369]: Connection closed by authenticating user root 103.39.209.183 port 35774 [preauth]
Dec 01 20:04:03 np0005541545.novalocal sshd-session[8371]: Connection closed by authenticating user root 103.39.209.183 port 35782 [preauth]
Dec 01 20:04:04 np0005541545.novalocal sshd-session[8373]: Connection closed by authenticating user root 103.39.209.183 port 37226 [preauth]
Dec 01 20:04:06 np0005541545.novalocal sshd-session[8375]: Connection closed by authenticating user root 103.39.209.183 port 37242 [preauth]
Dec 01 20:04:08 np0005541545.novalocal sshd-session[8378]: Connection closed by authenticating user root 103.39.209.183 port 37254 [preauth]
Dec 01 20:04:11 np0005541545.novalocal sshd-session[8380]: Connection closed by authenticating user root 103.39.209.183 port 37260 [preauth]
Dec 01 20:04:14 np0005541545.novalocal sshd-session[8387]: Connection closed by authenticating user root 103.39.209.183 port 37270 [preauth]
Dec 01 20:04:15 np0005541545.novalocal sshd-session[8391]: Connection closed by authenticating user root 103.39.209.183 port 39522 [preauth]
Dec 01 20:04:18 np0005541545.novalocal sshd-session[8393]: Connection closed by authenticating user root 103.39.209.183 port 39526 [preauth]
Dec 01 20:04:21 np0005541545.novalocal sshd-session[8414]: Connection closed by authenticating user root 103.39.209.183 port 39540 [preauth]
Dec 01 20:04:24 np0005541545.novalocal sshd-session[8431]: Connection closed by authenticating user root 103.39.209.183 port 39552 [preauth]
Dec 01 20:04:25 np0005541545.novalocal sshd-session[8450]: Connection closed by authenticating user root 103.39.209.183 port 45914 [preauth]
Dec 01 20:04:27 np0005541545.novalocal sshd-session[8452]: Connection closed by authenticating user root 103.39.209.183 port 45918 [preauth]
Dec 01 20:04:30 np0005541545.novalocal sshd-session[8454]: Connection closed by authenticating user root 103.39.209.183 port 45934 [preauth]
Dec 01 20:04:35 np0005541545.novalocal sshd-session[8456]: Connection closed by authenticating user root 103.39.209.183 port 45938 [preauth]
Dec 01 20:04:38 np0005541545.novalocal sshd-session[8459]: Connection closed by authenticating user root 103.39.209.183 port 50298 [preauth]
Dec 01 20:04:40 np0005541545.novalocal sshd-session[8466]: Connection closed by authenticating user root 103.39.209.183 port 50312 [preauth]
Dec 01 20:04:41 np0005541545.novalocal sshd-session[8492]: Connection closed by authenticating user root 103.39.209.183 port 50318 [preauth]
Dec 01 20:04:43 np0005541545.novalocal sshd-session[8499]: Connection closed by authenticating user root 103.39.209.183 port 50326 [preauth]
Dec 01 20:04:44 np0005541545.novalocal sshd-session[8502]: Connection closed by authenticating user root 103.39.209.183 port 43500 [preauth]
Dec 01 20:04:47 np0005541545.novalocal sshd-session[8504]: Connection closed by authenticating user root 103.39.209.183 port 43508 [preauth]
Dec 01 20:04:49 np0005541545.novalocal kernel: SELinux:  Converting 389 SID table entries...
Dec 01 20:04:49 np0005541545.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 01 20:04:49 np0005541545.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 01 20:04:49 np0005541545.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 01 20:04:49 np0005541545.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 01 20:04:49 np0005541545.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 01 20:04:49 np0005541545.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 01 20:04:49 np0005541545.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 01 20:04:51 np0005541545.novalocal sshd-session[8506]: Connection closed by authenticating user root 103.39.209.183 port 43524 [preauth]
Dec 01 20:04:53 np0005541545.novalocal sshd-session[8516]: Connection closed by authenticating user root 103.39.209.183 port 43528 [preauth]
Dec 01 20:04:56 np0005541545.novalocal sshd-session[8518]: Connection closed by authenticating user root 103.39.209.183 port 43540 [preauth]
Dec 01 20:04:58 np0005541545.novalocal kernel: SELinux:  Converting 389 SID table entries...
Dec 01 20:04:58 np0005541545.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 01 20:04:58 np0005541545.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 01 20:04:58 np0005541545.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 01 20:04:58 np0005541545.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 01 20:04:58 np0005541545.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 01 20:04:58 np0005541545.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 01 20:04:58 np0005541545.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 01 20:05:00 np0005541545.novalocal sshd-session[8520]: Connection closed by authenticating user root 103.39.209.183 port 53244 [preauth]
Dec 01 20:05:02 np0005541545.novalocal sshd-session[8530]: Connection closed by authenticating user root 103.39.209.183 port 53258 [preauth]
Dec 01 20:05:04 np0005541545.novalocal sshd-session[8532]: Connection closed by authenticating user root 103.39.209.183 port 53266 [preauth]
Dec 01 20:05:06 np0005541545.novalocal kernel: SELinux:  Converting 389 SID table entries...
Dec 01 20:05:06 np0005541545.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 01 20:05:06 np0005541545.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 01 20:05:06 np0005541545.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 01 20:05:06 np0005541545.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 01 20:05:06 np0005541545.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 01 20:05:06 np0005541545.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 01 20:05:06 np0005541545.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 01 20:05:08 np0005541545.novalocal setsebool[8545]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 01 20:05:08 np0005541545.novalocal setsebool[8545]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 01 20:05:08 np0005541545.novalocal sshd-session[8534]: Connection closed by authenticating user root 103.39.209.183 port 37106 [preauth]
Dec 01 20:05:18 np0005541545.novalocal kernel: SELinux:  Converting 392 SID table entries...
Dec 01 20:05:18 np0005541545.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 01 20:05:18 np0005541545.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 01 20:05:18 np0005541545.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 01 20:05:18 np0005541545.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 01 20:05:18 np0005541545.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 01 20:05:18 np0005541545.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 01 20:05:18 np0005541545.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 01 20:05:36 np0005541545.novalocal dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 01 20:05:36 np0005541545.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 01 20:05:36 np0005541545.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 01 20:05:36 np0005541545.novalocal systemd[1]: Reloading.
Dec 01 20:05:36 np0005541545.novalocal systemd-rc-local-generator[9293]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:05:36 np0005541545.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 01 20:05:38 np0005541545.novalocal sudo[8168]: pam_unix(sudo:session): session closed for user root
Dec 01 20:05:38 np0005541545.novalocal python3[10759]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-081f-3e06-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:05:39 np0005541545.novalocal kernel: evm: overlay not supported
Dec 01 20:05:39 np0005541545.novalocal systemd[4299]: Starting D-Bus User Message Bus...
Dec 01 20:05:39 np0005541545.novalocal dbus-broker-launch[12003]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 01 20:05:39 np0005541545.novalocal dbus-broker-launch[12003]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 01 20:05:39 np0005541545.novalocal systemd[4299]: Started D-Bus User Message Bus.
Dec 01 20:05:39 np0005541545.novalocal dbus-broker-lau[12003]: Ready
Dec 01 20:05:39 np0005541545.novalocal systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 01 20:05:39 np0005541545.novalocal systemd[4299]: Created slice Slice /user.
Dec 01 20:05:39 np0005541545.novalocal systemd[4299]: podman-11800.scope: unit configures an IP firewall, but not running as root.
Dec 01 20:05:39 np0005541545.novalocal systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Dec 01 20:05:39 np0005541545.novalocal systemd[4299]: Started podman-11800.scope.
Dec 01 20:05:39 np0005541545.novalocal systemd[4299]: Started podman-pause-0ab94827.scope.
Dec 01 20:05:40 np0005541545.novalocal sudo[12463]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkclhmxducnkhrlvfbfgiekqlyqxejkt ; /usr/bin/python3'
Dec 01 20:05:40 np0005541545.novalocal sudo[12463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:05:40 np0005541545.novalocal python3[12494]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.18:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.18:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:05:40 np0005541545.novalocal python3[12494]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 01 20:05:40 np0005541545.novalocal sudo[12463]: pam_unix(sudo:session): session closed for user root
Dec 01 20:05:40 np0005541545.novalocal sshd-session[8144]: Connection closed by 38.102.83.114 port 34220
Dec 01 20:05:40 np0005541545.novalocal sshd-session[8141]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:05:40 np0005541545.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 01 20:05:40 np0005541545.novalocal systemd[1]: session-5.scope: Consumed 1min 12.574s CPU time.
Dec 01 20:05:40 np0005541545.novalocal systemd-logind[796]: Session 5 logged out. Waiting for processes to exit.
Dec 01 20:05:40 np0005541545.novalocal systemd-logind[796]: Removed session 5.
Dec 01 20:05:59 np0005541545.novalocal sshd-session[21594]: Unable to negotiate with 38.102.83.9 port 47298: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 01 20:05:59 np0005541545.novalocal sshd-session[21597]: Unable to negotiate with 38.102.83.9 port 47310: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 01 20:05:59 np0005541545.novalocal sshd-session[21595]: Connection closed by 38.102.83.9 port 47276 [preauth]
Dec 01 20:05:59 np0005541545.novalocal sshd-session[21601]: Unable to negotiate with 38.102.83.9 port 47288: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 01 20:05:59 np0005541545.novalocal sshd-session[21603]: Connection closed by 38.102.83.9 port 47282 [preauth]
Dec 01 20:06:01 np0005541545.novalocal anacron[7544]: Job `cron.daily' started
Dec 01 20:06:01 np0005541545.novalocal anacron[7544]: Job `cron.daily' terminated
Dec 01 20:06:03 np0005541545.novalocal sshd-session[23457]: Accepted publickey for zuul from 38.102.83.114 port 41100 ssh2: RSA SHA256:oog7MteKjkTJ4LxwhsVGQb4CwQfo2OF07ZrpoP0w1bM
Dec 01 20:06:03 np0005541545.novalocal systemd-logind[796]: New session 6 of user zuul.
Dec 01 20:06:03 np0005541545.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 01 20:06:03 np0005541545.novalocal sshd-session[23457]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:06:04 np0005541545.novalocal python3[23592]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNEk9Cut3shaAavNKansGjbrGEdxDuQD60tu4uiG73Sp7mT7RiWJ9eEgWidYF4I7Wr3zau/wn2H9FmId2ipuN9Q= zuul@np0005541544.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 20:06:04 np0005541545.novalocal sudo[23765]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mugdejxyrwsqinilbrahtdbjwgtglrwr ; /usr/bin/python3'
Dec 01 20:06:04 np0005541545.novalocal sudo[23765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:06:04 np0005541545.novalocal python3[23777]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNEk9Cut3shaAavNKansGjbrGEdxDuQD60tu4uiG73Sp7mT7RiWJ9eEgWidYF4I7Wr3zau/wn2H9FmId2ipuN9Q= zuul@np0005541544.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 20:06:04 np0005541545.novalocal sudo[23765]: pam_unix(sudo:session): session closed for user root
Dec 01 20:06:05 np0005541545.novalocal sudo[24202]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owhubcaxmypdjhuepwveredoawrgosil ; /usr/bin/python3'
Dec 01 20:06:05 np0005541545.novalocal sudo[24202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:06:05 np0005541545.novalocal python3[24212]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541545.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 01 20:06:05 np0005541545.novalocal useradd[24278]: new group: name=cloud-admin, GID=1002
Dec 01 20:06:05 np0005541545.novalocal useradd[24278]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 01 20:06:05 np0005541545.novalocal sudo[24202]: pam_unix(sudo:session): session closed for user root
Dec 01 20:06:05 np0005541545.novalocal sudo[24419]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inybiinyjqqmjmqhvtvxhsqktkduvdfn ; /usr/bin/python3'
Dec 01 20:06:05 np0005541545.novalocal sudo[24419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:06:05 np0005541545.novalocal python3[24427]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNEk9Cut3shaAavNKansGjbrGEdxDuQD60tu4uiG73Sp7mT7RiWJ9eEgWidYF4I7Wr3zau/wn2H9FmId2ipuN9Q= zuul@np0005541544.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 01 20:06:05 np0005541545.novalocal sudo[24419]: pam_unix(sudo:session): session closed for user root
Dec 01 20:06:06 np0005541545.novalocal sudo[24651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdbmsudxolvqmryglroiilvvejgjwmnx ; /usr/bin/python3'
Dec 01 20:06:06 np0005541545.novalocal sudo[24651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:06:06 np0005541545.novalocal python3[24663]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:06:06 np0005541545.novalocal sudo[24651]: pam_unix(sudo:session): session closed for user root
Dec 01 20:06:06 np0005541545.novalocal sudo[24924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grslhzvnxexxqjxvbbggksaejzbntesl ; /usr/bin/python3'
Dec 01 20:06:06 np0005541545.novalocal sudo[24924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:06:06 np0005541545.novalocal python3[24935]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764619565.9130356-135-185566975831289/source _original_basename=tmp7q3fqi3d follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:06:06 np0005541545.novalocal sudo[24924]: pam_unix(sudo:session): session closed for user root
Dec 01 20:06:07 np0005541545.novalocal sudo[25221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yygjcezqasvvhwhxkblxggsdcdlhscyr ; /usr/bin/python3'
Dec 01 20:06:07 np0005541545.novalocal sudo[25221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:06:07 np0005541545.novalocal python3[25230]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 01 20:06:07 np0005541545.novalocal systemd[1]: Starting Hostname Service...
Dec 01 20:06:07 np0005541545.novalocal systemd[1]: Started Hostname Service.
Dec 01 20:06:07 np0005541545.novalocal systemd-hostnamed[25327]: Changed pretty hostname to 'compute-0'
Dec 01 20:06:07 compute-0 systemd-hostnamed[25327]: Hostname set to <compute-0> (static)
Dec 01 20:06:07 compute-0 NetworkManager[7247]: <info>  [1764619567.6759] hostname: static hostname changed from "np0005541545.novalocal" to "compute-0"
Dec 01 20:06:07 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 01 20:06:07 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 01 20:06:07 compute-0 sudo[25221]: pam_unix(sudo:session): session closed for user root
Dec 01 20:06:07 compute-0 sshd-session[23524]: Connection closed by 38.102.83.114 port 41100
Dec 01 20:06:07 compute-0 sshd-session[23457]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:06:07 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 01 20:06:07 compute-0 systemd[1]: session-6.scope: Consumed 2.285s CPU time.
Dec 01 20:06:07 compute-0 systemd-logind[796]: Session 6 logged out. Waiting for processes to exit.
Dec 01 20:06:07 compute-0 systemd-logind[796]: Removed session 6.
Dec 01 20:06:17 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 01 20:06:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 01 20:06:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 01 20:06:23 compute-0 systemd[1]: man-db-cache-update.service: Consumed 57.329s CPU time.
Dec 01 20:06:23 compute-0 systemd[1]: run-r2b2f78250d5f4c478202cb0044a7a349.service: Deactivated successfully.
Dec 01 20:06:37 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 01 20:07:23 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 01 20:07:23 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 01 20:07:23 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 01 20:07:23 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 01 20:08:03 compute-0 sshd-session[30346]: Connection closed by authenticating user root 59.24.28.114 port 54740 [preauth]
Dec 01 20:08:04 compute-0 sshd-session[30348]: Connection closed by authenticating user root 59.24.28.114 port 56188 [preauth]
Dec 01 20:08:05 compute-0 sshd-session[30350]: Connection closed by authenticating user root 59.24.28.114 port 57538 [preauth]
Dec 01 20:08:06 compute-0 sshd-session[30352]: Connection closed by authenticating user root 59.24.28.114 port 58954 [preauth]
Dec 01 20:08:08 compute-0 sshd-session[30354]: Connection closed by authenticating user root 59.24.28.114 port 60286 [preauth]
Dec 01 20:08:09 compute-0 sshd-session[30356]: Connection closed by authenticating user root 59.24.28.114 port 33394 [preauth]
Dec 01 20:08:13 compute-0 sshd-session[30358]: Connection closed by authenticating user root 59.24.28.114 port 34930 [preauth]
Dec 01 20:08:14 compute-0 sshd-session[30360]: Connection closed by authenticating user root 59.24.28.114 port 39516 [preauth]
Dec 01 20:08:16 compute-0 sshd-session[30362]: Connection closed by authenticating user root 59.24.28.114 port 40796 [preauth]
Dec 01 20:09:05 compute-0 sshd-session[30366]: Received disconnect from 43.251.161.76 port 60118:11:  [preauth]
Dec 01 20:09:05 compute-0 sshd-session[30366]: Disconnected from authenticating user root 43.251.161.76 port 60118 [preauth]
Dec 01 20:10:02 compute-0 sshd-session[30368]: Accepted publickey for zuul from 38.102.83.9 port 34224 ssh2: RSA SHA256:oog7MteKjkTJ4LxwhsVGQb4CwQfo2OF07ZrpoP0w1bM
Dec 01 20:10:02 compute-0 systemd-logind[796]: New session 7 of user zuul.
Dec 01 20:10:02 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 01 20:10:02 compute-0 sshd-session[30368]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:10:03 compute-0 python3[30444]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:10:04 compute-0 sudo[30558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwgwcmujgimfekjzfhstfwuzzuwtrlcv ; /usr/bin/python3'
Dec 01 20:10:04 compute-0 sudo[30558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:04 compute-0 python3[30560]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:10:04 compute-0 sudo[30558]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:04 compute-0 sudo[30631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyfuscnoiogfwjiskmbondedsmsndqae ; /usr/bin/python3'
Dec 01 20:10:04 compute-0 sudo[30631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:05 compute-0 python3[30633]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764619804.3098495-33889-9024161646800/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:10:05 compute-0 sudo[30631]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:05 compute-0 sudo[30657]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-micevxvepwwufefpyqnmrktfgpzkpwif ; /usr/bin/python3'
Dec 01 20:10:05 compute-0 sudo[30657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:05 compute-0 python3[30659]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:10:05 compute-0 sudo[30657]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:05 compute-0 sudo[30730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orzropfswlwrleaeqzhhkdgxtrujikvu ; /usr/bin/python3'
Dec 01 20:10:05 compute-0 sudo[30730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:05 compute-0 python3[30732]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764619804.3098495-33889-9024161646800/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:10:05 compute-0 sudo[30730]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:05 compute-0 sudo[30756]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkxrpeujqiczwplqkzpjjqbztvytqbi ; /usr/bin/python3'
Dec 01 20:10:05 compute-0 sudo[30756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:05 compute-0 python3[30758]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:10:05 compute-0 sudo[30756]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:06 compute-0 sudo[30829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtoivwlhavptouezwhhnwueobjfjdfmm ; /usr/bin/python3'
Dec 01 20:10:06 compute-0 sudo[30829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:06 compute-0 python3[30831]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764619804.3098495-33889-9024161646800/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:10:06 compute-0 sudo[30829]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:06 compute-0 sudo[30855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiwxznvduoqzeyicyjzqmctvkdvvbfzu ; /usr/bin/python3'
Dec 01 20:10:06 compute-0 sudo[30855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:06 compute-0 python3[30857]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:10:06 compute-0 sudo[30855]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:06 compute-0 sudo[30928]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bydwxsgmigpuqlweusyitmfvirzppobp ; /usr/bin/python3'
Dec 01 20:10:06 compute-0 sudo[30928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:06 compute-0 python3[30930]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764619804.3098495-33889-9024161646800/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:10:06 compute-0 sudo[30928]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:06 compute-0 sudo[30954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swfiwnsqqbqqbgfgpsvnnzwwlsfzdzxv ; /usr/bin/python3'
Dec 01 20:10:06 compute-0 sudo[30954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:07 compute-0 python3[30956]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:10:07 compute-0 sudo[30954]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:07 compute-0 sudo[31027]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vexoraxlmxyhvsxwltakpnarrunoojuu ; /usr/bin/python3'
Dec 01 20:10:07 compute-0 sudo[31027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:07 compute-0 python3[31029]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764619804.3098495-33889-9024161646800/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:10:07 compute-0 sudo[31027]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:07 compute-0 sudo[31053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuznjfwfxwilvchelzorfcvquiffwkcx ; /usr/bin/python3'
Dec 01 20:10:07 compute-0 sudo[31053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:07 compute-0 python3[31055]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:10:07 compute-0 sudo[31053]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:08 compute-0 sudo[31126]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dswtjasdxknnwhxshtljmndkvjzhxosj ; /usr/bin/python3'
Dec 01 20:10:08 compute-0 sudo[31126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:08 compute-0 python3[31128]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764619804.3098495-33889-9024161646800/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:10:08 compute-0 sudo[31126]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:08 compute-0 sudo[31152]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skokfhfjxskijflzeddbjkejsvhoeolz ; /usr/bin/python3'
Dec 01 20:10:08 compute-0 sudo[31152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:08 compute-0 python3[31154]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:10:08 compute-0 sudo[31152]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:08 compute-0 sudo[31225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asiykqvixkjrcooilluhigvibgpxltcs ; /usr/bin/python3'
Dec 01 20:10:08 compute-0 sudo[31225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:10:08 compute-0 python3[31227]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764619804.3098495-33889-9024161646800/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:10:08 compute-0 sudo[31225]: pam_unix(sudo:session): session closed for user root
Dec 01 20:10:11 compute-0 sshd-session[31253]: Unable to negotiate with 192.168.122.11 port 41768: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 01 20:10:11 compute-0 sshd-session[31252]: Connection closed by 192.168.122.11 port 41752 [preauth]
Dec 01 20:10:11 compute-0 sshd-session[31257]: Unable to negotiate with 192.168.122.11 port 41782: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 01 20:10:11 compute-0 sshd-session[31254]: Unable to negotiate with 192.168.122.11 port 41796: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 01 20:10:11 compute-0 sshd-session[31255]: Connection closed by 192.168.122.11 port 41764 [preauth]
Dec 01 20:10:21 compute-0 python3[31285]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:10:43 compute-0 sshd-session[31287]: Invalid user  from 8.210.108.204 port 56066
Dec 01 20:10:50 compute-0 sshd-session[31287]: Connection closed by invalid user  8.210.108.204 port 56066 [preauth]
Dec 01 20:11:35 compute-0 sudo[31290]:     zuul : TTY=pts/0 ; PWD=/etc/yum.repos.d ; USER=root ; COMMAND=/bin/vi delorean-antelope-testing.repo
Dec 01 20:11:35 compute-0 sudo[31290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:11:45 compute-0 sudo[31290]: pam_unix(sudo:session): session closed for user root
Dec 01 20:11:51 compute-0 sudo[31295]:     zuul : TTY=pts/0 ; PWD=/etc/yum.repos.d ; USER=root ; COMMAND=/bin/dnf repolist
Dec 01 20:11:51 compute-0 sudo[31295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:11:52 compute-0 sudo[31295]: pam_unix(sudo:session): session closed for user root
Dec 01 20:11:59 compute-0 sudo[31299]:     zuul : TTY=pts/0 ; PWD=/etc/yum.repos.d ; USER=root ; COMMAND=/bin/dnf install -y ceph-common
Dec 01 20:11:59 compute-0 sudo[31299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:12:09 compute-0 sudo[31299]: pam_unix(sudo:session): session closed for user root
Dec 01 20:12:19 compute-0 sudo[31382]:     zuul : TTY=pts/0 ; PWD=/etc/yum.repos.d ; USER=root ; COMMAND=/bin/dnf repolist
Dec 01 20:12:19 compute-0 sudo[31382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:12:19 compute-0 sudo[31382]: pam_unix(sudo:session): session closed for user root
Dec 01 20:12:28 compute-0 sudo[31385]:     zuul : TTY=pts/0 ; PWD=/etc/yum.repos.d ; USER=root ; COMMAND=/bin/dnf search blake3
Dec 01 20:12:28 compute-0 sudo[31385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:12:46 compute-0 sudo[31385]: pam_unix(sudo:session): session closed for user root
Dec 01 20:13:02 compute-0 sshd-session[6976]: Received disconnect from 93.44.176.199 port 41920:11: disconnected by user
Dec 01 20:13:02 compute-0 sshd-session[6976]: Disconnected from user zuul 93.44.176.199 port 41920
Dec 01 20:13:02 compute-0 sshd-session[6973]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:13:02 compute-0 systemd[1]: session-3.scope: Deactivated successfully.
Dec 01 20:13:02 compute-0 systemd[1]: session-3.scope: Consumed 26.453s CPU time.
Dec 01 20:13:02 compute-0 systemd-logind[796]: Session 3 logged out. Waiting for processes to exit.
Dec 01 20:13:02 compute-0 systemd-logind[796]: Removed session 3.
Dec 01 20:13:06 compute-0 sshd-session[31406]: Accepted publickey for zuul from 93.44.176.199 port 38658 ssh2: ED25519 SHA256:z36VxXeoDAIqjKJGq1nfXwPuvJNXq/XTSKoTDWY5Afk
Dec 01 20:13:06 compute-0 systemd-logind[796]: New session 8 of user zuul.
Dec 01 20:13:06 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 01 20:13:06 compute-0 sshd-session[31406]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:13:06 compute-0 systemd[1]: Starting Hostname Service...
Dec 01 20:13:06 compute-0 systemd[1]: Started Hostname Service.
Dec 01 20:13:36 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 01 20:15:21 compute-0 sshd-session[30371]: Received disconnect from 38.102.83.9 port 34224:11: disconnected by user
Dec 01 20:15:21 compute-0 sshd-session[30371]: Disconnected from user zuul 38.102.83.9 port 34224
Dec 01 20:15:21 compute-0 sshd-session[30368]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:15:21 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 01 20:15:21 compute-0 systemd[1]: session-7.scope: Consumed 5.104s CPU time.
Dec 01 20:15:21 compute-0 systemd-logind[796]: Session 7 logged out. Waiting for processes to exit.
Dec 01 20:15:21 compute-0 systemd-logind[796]: Removed session 7.
Dec 01 20:15:38 compute-0 sshd-session[31447]: Received disconnect from 101.36.224.146 port 59734:11:  [preauth]
Dec 01 20:15:38 compute-0 sshd-session[31447]: Disconnected from authenticating user root 101.36.224.146 port 59734 [preauth]
Dec 01 20:18:10 compute-0 sshd-session[31450]: Received disconnect from 193.46.255.244 port 33744:11:  [preauth]
Dec 01 20:18:10 compute-0 sshd-session[31450]: Disconnected from authenticating user root 193.46.255.244 port 33744 [preauth]
Dec 01 20:21:41 compute-0 sshd-session[31455]: Accepted publickey for zuul from 192.168.122.30 port 52630 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:21:41 compute-0 systemd-logind[796]: New session 9 of user zuul.
Dec 01 20:21:41 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 01 20:21:41 compute-0 sshd-session[31455]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:21:42 compute-0 python3.9[31608]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:21:43 compute-0 sudo[31787]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njdykyuudzgjjlzbbcsiqjgqibafoygn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620502.8775682-32-143342608181962/AnsiballZ_command.py'
Dec 01 20:21:43 compute-0 sudo[31787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:21:43 compute-0 python3.9[31789]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:21:55 compute-0 sudo[31787]: pam_unix(sudo:session): session closed for user root
Dec 01 20:21:56 compute-0 sshd-session[31458]: Connection closed by 192.168.122.30 port 52630
Dec 01 20:21:56 compute-0 sshd-session[31455]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:21:56 compute-0 systemd-logind[796]: Session 9 logged out. Waiting for processes to exit.
Dec 01 20:21:56 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 01 20:21:56 compute-0 systemd[1]: session-9.scope: Consumed 8.090s CPU time.
Dec 01 20:21:56 compute-0 systemd-logind[796]: Removed session 9.
Dec 01 20:22:11 compute-0 sshd-session[31847]: Accepted publickey for zuul from 192.168.122.30 port 33500 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:22:11 compute-0 systemd-logind[796]: New session 10 of user zuul.
Dec 01 20:22:11 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 01 20:22:11 compute-0 sshd-session[31847]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:22:12 compute-0 python3.9[32000]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 01 20:22:13 compute-0 python3.9[32174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:22:14 compute-0 sudo[32324]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eveuobhmypdwtwpclwbspcaaelbwhsjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620533.869933-45-260884300716176/AnsiballZ_command.py'
Dec 01 20:22:14 compute-0 sudo[32324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:22:14 compute-0 python3.9[32326]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:22:14 compute-0 sudo[32324]: pam_unix(sudo:session): session closed for user root
Dec 01 20:22:15 compute-0 sudo[32477]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kagvwdkpadfkkvvpidiprnshcnaidxfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620534.8674123-57-128003098174797/AnsiballZ_stat.py'
Dec 01 20:22:15 compute-0 sudo[32477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:22:15 compute-0 python3.9[32479]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:22:15 compute-0 sudo[32477]: pam_unix(sudo:session): session closed for user root
Dec 01 20:22:16 compute-0 sudo[32629]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccpyjtuavgsyijigdylazcwxlanfcvtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620535.7369502-65-218676462854223/AnsiballZ_file.py'
Dec 01 20:22:16 compute-0 sudo[32629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:22:16 compute-0 python3.9[32631]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:22:16 compute-0 sudo[32629]: pam_unix(sudo:session): session closed for user root
Dec 01 20:22:17 compute-0 sudo[32782]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziuhfetlyuxkmaqjvgdpwnbvovkedlyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620536.697887-73-4482415346981/AnsiballZ_stat.py'
Dec 01 20:22:17 compute-0 sudo[32782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:22:17 compute-0 python3.9[32784]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:22:17 compute-0 sudo[32782]: pam_unix(sudo:session): session closed for user root
Dec 01 20:22:17 compute-0 sudo[32905]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svzyobeporfwuapofembslwkmynrjpkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620536.697887-73-4482415346981/AnsiballZ_copy.py'
Dec 01 20:22:17 compute-0 sudo[32905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:22:18 compute-0 python3.9[32907]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764620536.697887-73-4482415346981/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:22:18 compute-0 sudo[32905]: pam_unix(sudo:session): session closed for user root
Dec 01 20:22:18 compute-0 sudo[33057]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwzyyppwcreohoqcesimyxgbopnacxnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620538.2918653-88-229215883994116/AnsiballZ_setup.py'
Dec 01 20:22:18 compute-0 sudo[33057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:22:18 compute-0 python3.9[33059]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:22:19 compute-0 sudo[33057]: pam_unix(sudo:session): session closed for user root
Dec 01 20:22:19 compute-0 sudo[33213]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfthshxfqffzltariemxkfsksnwzuyzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620539.3088386-96-70092156163973/AnsiballZ_file.py'
Dec 01 20:22:19 compute-0 sudo[33213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:22:19 compute-0 python3.9[33215]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:22:19 compute-0 sudo[33213]: pam_unix(sudo:session): session closed for user root
Dec 01 20:22:20 compute-0 sudo[33365]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ompfywxevtzyaecwvptuxlpjcvxnhnln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620540.063054-105-155521605647289/AnsiballZ_file.py'
Dec 01 20:22:20 compute-0 sudo[33365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:22:20 compute-0 python3.9[33367]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:22:20 compute-0 sudo[33365]: pam_unix(sudo:session): session closed for user root
Dec 01 20:22:21 compute-0 python3.9[33517]: ansible-ansible.builtin.service_facts Invoked
Dec 01 20:22:24 compute-0 python3.9[33771]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:22:25 compute-0 python3.9[33921]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:22:26 compute-0 python3.9[34075]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:22:27 compute-0 sudo[34231]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccaexhbzthkupduykulkyxijnjiphbwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620547.1223612-153-3180579167975/AnsiballZ_setup.py'
Dec 01 20:22:27 compute-0 sudo[34231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:22:27 compute-0 python3.9[34233]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:22:28 compute-0 sudo[34231]: pam_unix(sudo:session): session closed for user root
Dec 01 20:22:28 compute-0 sudo[34315]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pczdnzirfafkbzdiilrvdiraqkgkojvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620547.1223612-153-3180579167975/AnsiballZ_dnf.py'
Dec 01 20:22:28 compute-0 sudo[34315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:22:28 compute-0 python3.9[34317]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:23:57 compute-0 systemd[1]: Reloading.
Dec 01 20:23:57 compute-0 systemd-rc-local-generator[34662]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:23:57 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 01 20:23:57 compute-0 systemd[1]: Reloading.
Dec 01 20:23:57 compute-0 systemd-rc-local-generator[34698]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:23:57 compute-0 systemd[1]: Starting dnf makecache...
Dec 01 20:23:57 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 01 20:23:57 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 01 20:23:57 compute-0 systemd[1]: Reloading.
Dec 01 20:23:57 compute-0 systemd-rc-local-generator[34742]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:23:58 compute-0 dnf[34710]: Failed determining last makecache time.
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-openstack-barbican-42b4c41831408a8e323 153 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 169 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-openstack-cinder-1c00d6490d88e436f26ef 159 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-python-stevedore-c4acc5639fd2329372142 163 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-python-cloudkitty-tests-tempest-2c80f8 156 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 167 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 153 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-python-designate-tests-tempest-347fdbc 156 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-openstack-glance-1fd12c29b339f30fe823e 138 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 133 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-openstack-manila-3c01b7181572c95dac462 151 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-python-whitebox-neutron-tests-tempest- 151 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-openstack-octavia-ba397f07a7331190208c 130 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-openstack-watcher-c014f81a8647287f6dcc 137 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec 01 20:23:58 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec 01 20:23:58 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-ansible-config_template-5ccaa22121a7ff 157 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 165 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-openstack-swift-dc98a8463506ac520c469a 125 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-python-tempestconf-8515371b7cceebd4282 135 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: delorean-openstack-heat-ui-013accbfd179753bc3f0 127 kB/s | 3.0 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: CentOS Stream 9 - BaseOS                         73 kB/s | 7.3 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: CentOS Stream 9 - AppStream                      73 kB/s | 7.4 kB     00:00
Dec 01 20:23:58 compute-0 dnf[34710]: CentOS Stream 9 - CRB                            74 kB/s | 7.2 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: CentOS Stream 9 - Extras packages                23 kB/s | 8.3 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: dlrn-antelope-testing                           102 kB/s | 3.0 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: dlrn-antelope-build-deps                        107 kB/s | 3.0 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: centos9-rabbitmq                                106 kB/s | 3.0 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: centos9-storage                                 113 kB/s | 3.0 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: centos9-opstools                                134 kB/s | 3.0 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: NFV SIG OpenvSwitch                             131 kB/s | 3.0 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: repo-setup-centos-appstream                     152 kB/s | 4.4 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: repo-setup-centos-baseos                        160 kB/s | 3.9 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: repo-setup-centos-highavailability              156 kB/s | 3.9 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: repo-setup-centos-powertools                    179 kB/s | 4.3 kB     00:00
Dec 01 20:23:59 compute-0 dnf[34710]: Extra Packages for Enterprise Linux 9 - x86_64  224 kB/s |  31 kB     00:00
Dec 01 20:24:00 compute-0 dnf[34710]: Metadata cache created.
Dec 01 20:24:00 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 01 20:24:00 compute-0 systemd[1]: Finished dnf makecache.
Dec 01 20:24:00 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.760s CPU time.
Dec 01 20:25:01 compute-0 kernel: SELinux:  Converting 2720 SID table entries...
Dec 01 20:25:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 01 20:25:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 01 20:25:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 01 20:25:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 01 20:25:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 01 20:25:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 01 20:25:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 01 20:25:01 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 01 20:25:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 01 20:25:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 01 20:25:01 compute-0 systemd[1]: Reloading.
Dec 01 20:25:01 compute-0 systemd-rc-local-generator[35123]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:25:01 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 01 20:25:02 compute-0 sudo[34315]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:02 compute-0 sudo[36032]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmpwaxgjgxdbnvlykekckubzyckpuwzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620702.5806406-165-42265329444339/AnsiballZ_command.py'
Dec 01 20:25:02 compute-0 sudo[36032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:02 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 01 20:25:02 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 01 20:25:02 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.273s CPU time.
Dec 01 20:25:02 compute-0 systemd[1]: run-rd09d202780014470ae619ca9e9dad64c.service: Deactivated successfully.
Dec 01 20:25:03 compute-0 python3.9[36035]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:25:03 compute-0 sudo[36032]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:04 compute-0 sudo[36314]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpdsogujnmircsbjoldusqwscdyfyolp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620704.1320422-173-187348871364520/AnsiballZ_selinux.py'
Dec 01 20:25:04 compute-0 sudo[36314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:05 compute-0 python3.9[36316]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 01 20:25:05 compute-0 sudo[36314]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:05 compute-0 sudo[36466]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtjzqenkevipnaimhzdnggaznpfldarv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620705.5756547-184-156450960902385/AnsiballZ_command.py'
Dec 01 20:25:05 compute-0 sudo[36466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:06 compute-0 python3.9[36468]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 01 20:25:07 compute-0 sudo[36466]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:07 compute-0 sudo[36619]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijbqgihobmpmzkadtfajyazddxvbkcoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620707.3117049-192-117786730216865/AnsiballZ_file.py'
Dec 01 20:25:07 compute-0 sudo[36619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:08 compute-0 python3.9[36621]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:25:08 compute-0 sudo[36619]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:09 compute-0 sudo[36771]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvssrvrtndhbqteqrgoyyhwmrdmaejmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620708.822623-200-135900199535855/AnsiballZ_mount.py'
Dec 01 20:25:09 compute-0 sudo[36771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:09 compute-0 python3.9[36773]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 01 20:25:09 compute-0 sudo[36771]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:10 compute-0 sudo[36923]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwkprnnokpohjwwgzecrdhcjyudiyfjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620710.3741386-228-156910453574812/AnsiballZ_file.py'
Dec 01 20:25:10 compute-0 sudo[36923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:10 compute-0 python3.9[36925]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:25:10 compute-0 sudo[36923]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:11 compute-0 sudo[37075]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jigtipifofvcrjywjmydasllzsgizpln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620711.0970597-236-265479217317422/AnsiballZ_stat.py'
Dec 01 20:25:11 compute-0 sudo[37075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:11 compute-0 python3.9[37077]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:25:11 compute-0 sudo[37075]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:11 compute-0 sudo[37198]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywpxyryyvyzgkmnpxoaozkomxqzcuqtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620711.0970597-236-265479217317422/AnsiballZ_copy.py'
Dec 01 20:25:11 compute-0 sudo[37198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:12 compute-0 python3.9[37200]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620711.0970597-236-265479217317422/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3df024ac0733db6e5f9a52fcc7729417ba9442f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:25:12 compute-0 sudo[37198]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:12 compute-0 sudo[37350]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgpfbrfhxysnnzeemeqzjoqhtokbtnjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620712.667888-260-156854866833732/AnsiballZ_stat.py'
Dec 01 20:25:12 compute-0 sudo[37350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:16 compute-0 python3.9[37352]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:25:16 compute-0 sudo[37350]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:16 compute-0 sudo[37502]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faykocwfywdamswtldntrcoritpgosye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620716.4107318-268-134099802084829/AnsiballZ_command.py'
Dec 01 20:25:16 compute-0 sudo[37502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:16 compute-0 python3.9[37504]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:25:17 compute-0 sudo[37502]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:17 compute-0 sudo[37655]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydblgapfxduwbmbvhvdzdviuiurfolis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620717.2090628-276-142011227573975/AnsiballZ_file.py'
Dec 01 20:25:17 compute-0 sudo[37655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:17 compute-0 python3.9[37657]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:25:17 compute-0 sudo[37655]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:18 compute-0 sudo[37807]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlvipmmbbmluyujclufkqeiddsnkcdvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620718.0855973-287-254968669876815/AnsiballZ_getent.py'
Dec 01 20:25:18 compute-0 sudo[37807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:18 compute-0 python3.9[37809]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 01 20:25:18 compute-0 sudo[37807]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:18 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 20:25:19 compute-0 sudo[37961]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dabpayxwzazaflmakvcjldjrdrqezbbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620718.9168568-295-209465760105257/AnsiballZ_group.py'
Dec 01 20:25:19 compute-0 sudo[37961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:19 compute-0 python3.9[37963]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 01 20:25:19 compute-0 groupadd[37964]: group added to /etc/group: name=qemu, GID=107
Dec 01 20:25:19 compute-0 groupadd[37964]: group added to /etc/gshadow: name=qemu
Dec 01 20:25:19 compute-0 groupadd[37964]: new group: name=qemu, GID=107
Dec 01 20:25:19 compute-0 sudo[37961]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:20 compute-0 sudo[38119]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqxcsysbmpnuycjkeuitdwkuvsiyfvpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620719.8959217-303-63387604040184/AnsiballZ_user.py'
Dec 01 20:25:20 compute-0 sudo[38119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:20 compute-0 python3.9[38121]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 01 20:25:20 compute-0 useradd[38123]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/1
Dec 01 20:25:20 compute-0 sudo[38119]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:21 compute-0 sudo[38279]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdafeyykfgdtmaxhzbnbzjaeyoougrpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620720.8036792-311-4933793145609/AnsiballZ_getent.py'
Dec 01 20:25:21 compute-0 sudo[38279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:21 compute-0 python3.9[38281]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 01 20:25:21 compute-0 sudo[38279]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:21 compute-0 sudo[38432]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbazktxdrcxjlocxiruscwwffhcjntnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620721.5335703-319-3685119774668/AnsiballZ_group.py'
Dec 01 20:25:21 compute-0 sudo[38432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:22 compute-0 python3.9[38434]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 01 20:25:22 compute-0 groupadd[38435]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 01 20:25:22 compute-0 groupadd[38435]: group added to /etc/gshadow: name=hugetlbfs
Dec 01 20:25:22 compute-0 groupadd[38435]: new group: name=hugetlbfs, GID=42477
Dec 01 20:25:22 compute-0 sudo[38432]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:22 compute-0 sudo[38590]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lijodgibwuibenyjrgcordjvpzhwvdoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620722.3277504-328-278535100719447/AnsiballZ_file.py'
Dec 01 20:25:22 compute-0 sudo[38590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:22 compute-0 python3.9[38592]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 01 20:25:22 compute-0 sudo[38590]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:23 compute-0 sudo[38742]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hthyhqbjbiomlyywvvmpdhwwrvgnpump ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620723.1365619-339-212828031677956/AnsiballZ_dnf.py'
Dec 01 20:25:23 compute-0 sudo[38742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:23 compute-0 python3.9[38744]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:25:25 compute-0 sudo[38742]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:25 compute-0 sudo[38895]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfhwegshmxgburwtnitqndyevptycsck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620725.4462588-347-190388195716326/AnsiballZ_file.py'
Dec 01 20:25:25 compute-0 sudo[38895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:25 compute-0 python3.9[38897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:25:25 compute-0 sudo[38895]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:26 compute-0 sudo[39047]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylrdggbrkdhooudtvlripupqqcggswbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620726.0427701-355-142235073117387/AnsiballZ_stat.py'
Dec 01 20:25:26 compute-0 sudo[39047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:26 compute-0 python3.9[39049]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:25:26 compute-0 sudo[39047]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:26 compute-0 sudo[39170]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evqhlmgvdjxlxwffnuyzkjakdgvhtxuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620726.0427701-355-142235073117387/AnsiballZ_copy.py'
Dec 01 20:25:26 compute-0 sudo[39170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:27 compute-0 python3.9[39172]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764620726.0427701-355-142235073117387/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:25:27 compute-0 sudo[39170]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:27 compute-0 sudo[39322]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djpbvkpuohzlbamelquyjwdkxfqaggmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620727.2129622-370-233503023842552/AnsiballZ_systemd.py'
Dec 01 20:25:27 compute-0 sudo[39322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:28 compute-0 python3.9[39324]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:25:28 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 01 20:25:28 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 01 20:25:28 compute-0 kernel: Bridge firewalling registered
Dec 01 20:25:28 compute-0 systemd-modules-load[39328]: Inserted module 'br_netfilter'
Dec 01 20:25:28 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 01 20:25:28 compute-0 sudo[39322]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:28 compute-0 sudo[39481]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-donmnxuoarpokorwijsijodgkmkyldgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620728.356322-378-178111826137347/AnsiballZ_stat.py'
Dec 01 20:25:28 compute-0 sudo[39481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:28 compute-0 python3.9[39483]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:25:28 compute-0 sudo[39481]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:29 compute-0 sudo[39604]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejkssbzjgsrpqscfxqqopiqebbmqmbba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620728.356322-378-178111826137347/AnsiballZ_copy.py'
Dec 01 20:25:29 compute-0 sudo[39604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:29 compute-0 python3.9[39606]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764620728.356322-378-178111826137347/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:25:29 compute-0 sudo[39604]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:29 compute-0 sudo[39756]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vptdndkmqxxlggcevebtqquahdafyvsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620729.6856124-396-220662022464028/AnsiballZ_dnf.py'
Dec 01 20:25:29 compute-0 sudo[39756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:30 compute-0 python3.9[39758]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:25:40 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec 01 20:25:40 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec 01 20:25:40 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 01 20:25:40 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 01 20:25:40 compute-0 systemd[1]: Reloading.
Dec 01 20:25:41 compute-0 systemd-rc-local-generator[39842]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:25:41 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 01 20:25:41 compute-0 sudo[39756]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:42 compute-0 python3.9[41079]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:25:43 compute-0 python3.9[41962]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 01 20:25:44 compute-0 python3.9[42680]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:25:44 compute-0 sudo[43511]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbadbxgwkjximdmfhvhbvqfeuasbkuae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620744.3936942-435-27879891644275/AnsiballZ_command.py'
Dec 01 20:25:44 compute-0 sudo[43511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:44 compute-0 python3.9[43528]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:25:44 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 01 20:25:45 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 01 20:25:45 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 01 20:25:45 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.161s CPU time.
Dec 01 20:25:45 compute-0 systemd[1]: run-rfb91231447fa44899c462474c21a5ab2.service: Deactivated successfully.
Dec 01 20:25:45 compute-0 systemd[1]: Starting Authorization Manager...
Dec 01 20:25:45 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 01 20:25:45 compute-0 polkitd[44197]: Started polkitd version 0.117
Dec 01 20:25:45 compute-0 polkitd[44197]: Loading rules from directory /etc/polkit-1/rules.d
Dec 01 20:25:45 compute-0 polkitd[44197]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 01 20:25:45 compute-0 polkitd[44197]: Finished loading, compiling and executing 2 rules
Dec 01 20:25:45 compute-0 systemd[1]: Started Authorization Manager.
Dec 01 20:25:45 compute-0 polkitd[44197]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 01 20:25:45 compute-0 sudo[43511]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:46 compute-0 sudo[44365]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnjuzchevelyjnxfwaxbkvqticfkuvth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620745.717026-444-217629613746661/AnsiballZ_systemd.py'
Dec 01 20:25:46 compute-0 sudo[44365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:46 compute-0 python3.9[44367]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:25:46 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 01 20:25:46 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 01 20:25:46 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 01 20:25:46 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 01 20:25:46 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 01 20:25:46 compute-0 sudo[44365]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:47 compute-0 python3.9[44528]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 01 20:25:49 compute-0 sudo[44678]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqnoxtljhszjprlwygvrbwoscsxmashc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620749.21918-501-19722830513327/AnsiballZ_systemd.py'
Dec 01 20:25:49 compute-0 sudo[44678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:49 compute-0 python3.9[44680]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:25:49 compute-0 systemd[1]: Reloading.
Dec 01 20:25:49 compute-0 systemd-rc-local-generator[44710]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:25:50 compute-0 sudo[44678]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:50 compute-0 sudo[44867]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnpumducemhmnooutkkqodrikaeeyakx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620750.2323735-501-143570262973678/AnsiballZ_systemd.py'
Dec 01 20:25:50 compute-0 sudo[44867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:50 compute-0 python3.9[44869]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:25:50 compute-0 systemd[1]: Reloading.
Dec 01 20:25:50 compute-0 systemd-rc-local-generator[44899]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:25:51 compute-0 sudo[44867]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:51 compute-0 sudo[45056]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciaaopovkpgiwngeiehevyfiomyfoqin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620751.2813408-517-218345100724485/AnsiballZ_command.py'
Dec 01 20:25:51 compute-0 sudo[45056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:51 compute-0 python3.9[45058]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:25:51 compute-0 sudo[45056]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:52 compute-0 sudo[45209]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwbdiasmwmbyupeiwerdqwbjuasubstn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620752.0224605-525-188939643817258/AnsiballZ_command.py'
Dec 01 20:25:52 compute-0 sudo[45209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:52 compute-0 python3.9[45211]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:25:52 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 01 20:25:52 compute-0 sudo[45209]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:53 compute-0 sudo[45362]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roeqdujpgfjeptmijmuimzohfhespieh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620752.7147102-533-22532559792470/AnsiballZ_command.py'
Dec 01 20:25:53 compute-0 sudo[45362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:53 compute-0 python3.9[45364]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:25:54 compute-0 sudo[45362]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:55 compute-0 sudo[45524]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asdkzkxesobttqdefxqmeexfjwrgnwxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620754.844704-541-273734003610126/AnsiballZ_command.py'
Dec 01 20:25:55 compute-0 sudo[45524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:55 compute-0 python3.9[45526]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:25:56 compute-0 sudo[45524]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:57 compute-0 sudo[45677]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmzcbhbjsxkcaxtdnnujttmgaeakiuur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620756.6438816-549-221108386186840/AnsiballZ_systemd.py'
Dec 01 20:25:57 compute-0 sudo[45677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:25:57 compute-0 python3.9[45679]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:25:57 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 01 20:25:57 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 01 20:25:57 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 01 20:25:57 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 01 20:25:57 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 01 20:25:57 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 01 20:25:57 compute-0 sudo[45677]: pam_unix(sudo:session): session closed for user root
Dec 01 20:25:57 compute-0 sshd-session[31850]: Connection closed by 192.168.122.30 port 33500
Dec 01 20:25:57 compute-0 sshd-session[31847]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:25:57 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 01 20:25:57 compute-0 systemd[1]: session-10.scope: Consumed 2min 17.874s CPU time.
Dec 01 20:25:57 compute-0 systemd-logind[796]: Session 10 logged out. Waiting for processes to exit.
Dec 01 20:25:57 compute-0 systemd-logind[796]: Removed session 10.
Dec 01 20:26:01 compute-0 anacron[7544]: Job `cron.weekly' started
Dec 01 20:26:01 compute-0 anacron[7544]: Job `cron.weekly' terminated
Dec 01 20:26:02 compute-0 sshd-session[45711]: Accepted publickey for zuul from 192.168.122.30 port 55730 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:26:02 compute-0 systemd-logind[796]: New session 11 of user zuul.
Dec 01 20:26:02 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 01 20:26:02 compute-0 sshd-session[45711]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:26:03 compute-0 python3.9[45864]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:26:04 compute-0 sudo[46018]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqsqdsgkgfpjduqwdjcekogdllrcbzxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620764.4652386-36-211328840471902/AnsiballZ_getent.py'
Dec 01 20:26:04 compute-0 sudo[46018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:05 compute-0 python3.9[46020]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 01 20:26:05 compute-0 sudo[46018]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:05 compute-0 sudo[46171]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vybaysxwtvzjpiyodnuqvvwaneyajnop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620765.317598-44-235827172155707/AnsiballZ_group.py'
Dec 01 20:26:05 compute-0 sudo[46171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:06 compute-0 python3.9[46173]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 01 20:26:06 compute-0 groupadd[46174]: group added to /etc/group: name=openvswitch, GID=42476
Dec 01 20:26:06 compute-0 groupadd[46174]: group added to /etc/gshadow: name=openvswitch
Dec 01 20:26:06 compute-0 groupadd[46174]: new group: name=openvswitch, GID=42476
Dec 01 20:26:06 compute-0 sudo[46171]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:06 compute-0 sudo[46329]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiddxzotzikjbmamuydkgiyvekwqvlbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620766.3875237-52-142393348683516/AnsiballZ_user.py'
Dec 01 20:26:06 compute-0 sudo[46329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:07 compute-0 python3.9[46331]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 01 20:26:07 compute-0 useradd[46333]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/1
Dec 01 20:26:07 compute-0 useradd[46333]: add 'openvswitch' to group 'hugetlbfs'
Dec 01 20:26:07 compute-0 useradd[46333]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 01 20:26:07 compute-0 sudo[46329]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:07 compute-0 sudo[46489]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnzlrmrlcigzwhhpizzivwelbhjxzzzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620767.5940933-62-218460843218307/AnsiballZ_setup.py'
Dec 01 20:26:07 compute-0 sudo[46489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:08 compute-0 python3.9[46491]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:26:08 compute-0 sudo[46489]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:08 compute-0 sudo[46573]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gryaeecmlzuskafnjfllltzvohjcsipf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620767.5940933-62-218460843218307/AnsiballZ_dnf.py'
Dec 01 20:26:08 compute-0 sudo[46573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:09 compute-0 python3.9[46575]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 01 20:26:11 compute-0 sudo[46573]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:11 compute-0 sudo[46736]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnfbghnlqktxcirzxhcyiybodpxgimlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620771.4972618-76-258678479378432/AnsiballZ_dnf.py'
Dec 01 20:26:11 compute-0 sudo[46736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:12 compute-0 python3.9[46738]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:26:22 compute-0 kernel: SELinux:  Converting 2732 SID table entries...
Dec 01 20:26:22 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 01 20:26:22 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 01 20:26:22 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 01 20:26:22 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 01 20:26:22 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 01 20:26:22 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 01 20:26:22 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 01 20:26:22 compute-0 groupadd[46761]: group added to /etc/group: name=unbound, GID=993
Dec 01 20:26:22 compute-0 groupadd[46761]: group added to /etc/gshadow: name=unbound
Dec 01 20:26:22 compute-0 groupadd[46761]: new group: name=unbound, GID=993
Dec 01 20:26:22 compute-0 useradd[46768]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 01 20:26:23 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 01 20:26:23 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 01 20:26:24 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 01 20:26:24 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 01 20:26:24 compute-0 systemd[1]: Reloading.
Dec 01 20:26:24 compute-0 systemd-rc-local-generator[47267]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:26:24 compute-0 systemd-sysv-generator[47270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:26:24 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 01 20:26:25 compute-0 sudo[46736]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 01 20:26:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 01 20:26:25 compute-0 systemd[1]: run-r8cf66be3c7644c1094abaed1d8739400.service: Deactivated successfully.
Dec 01 20:26:26 compute-0 sudo[47834]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srrskhpkdilmjudmfvmbgaazxgpiskqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620785.4869807-84-15378495244775/AnsiballZ_systemd.py'
Dec 01 20:26:26 compute-0 sudo[47834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:26 compute-0 python3.9[47836]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 01 20:26:26 compute-0 systemd[1]: Reloading.
Dec 01 20:26:26 compute-0 systemd-sysv-generator[47870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:26:26 compute-0 systemd-rc-local-generator[47863]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:26:26 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 01 20:26:26 compute-0 chown[47878]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 01 20:26:26 compute-0 ovs-ctl[47883]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 01 20:26:26 compute-0 ovs-ctl[47883]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 01 20:26:26 compute-0 ovs-ctl[47883]: Starting ovsdb-server [  OK  ]
Dec 01 20:26:26 compute-0 ovs-vsctl[47932]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 01 20:26:27 compute-0 ovs-vsctl[47952]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"84a1d907-d341-4608-b17a-1f738619ea16\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 01 20:26:27 compute-0 ovs-ctl[47883]: Configuring Open vSwitch system IDs [  OK  ]
Dec 01 20:26:27 compute-0 ovs-vsctl[47958]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 01 20:26:27 compute-0 ovs-ctl[47883]: Enabling remote OVSDB managers [  OK  ]
Dec 01 20:26:27 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 01 20:26:27 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 01 20:26:27 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 01 20:26:27 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 01 20:26:27 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 01 20:26:27 compute-0 ovs-ctl[48003]: Inserting openvswitch module [  OK  ]
Dec 01 20:26:27 compute-0 ovs-ctl[47972]: Starting ovs-vswitchd [  OK  ]
Dec 01 20:26:27 compute-0 ovs-vsctl[48020]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 01 20:26:27 compute-0 ovs-ctl[47972]: Enabling remote OVSDB managers [  OK  ]
Dec 01 20:26:27 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 01 20:26:27 compute-0 systemd[1]: Starting Open vSwitch...
Dec 01 20:26:27 compute-0 systemd[1]: Finished Open vSwitch.
Dec 01 20:26:27 compute-0 sudo[47834]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:28 compute-0 python3.9[48172]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:26:29 compute-0 sudo[48322]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjohttmdutpuadymkersydnzlsetwndr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620788.6605299-102-52909727115287/AnsiballZ_sefcontext.py'
Dec 01 20:26:29 compute-0 sudo[48322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:29 compute-0 python3.9[48324]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 01 20:26:30 compute-0 kernel: SELinux:  Converting 2746 SID table entries...
Dec 01 20:26:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 01 20:26:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 01 20:26:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 01 20:26:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 01 20:26:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 01 20:26:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 01 20:26:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 01 20:26:30 compute-0 sudo[48322]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:31 compute-0 python3.9[48479]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:26:32 compute-0 sudo[48635]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdzathpqqphutffouwzqxhwxqkubiqsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620791.8790436-120-122356731015038/AnsiballZ_dnf.py'
Dec 01 20:26:32 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 01 20:26:32 compute-0 sudo[48635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:32 compute-0 python3.9[48637]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:26:33 compute-0 sudo[48635]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:34 compute-0 sudo[48788]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txqvejlvuplqqqwpdmruzxjqkqwsrmzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620793.7884529-128-253302178680395/AnsiballZ_command.py'
Dec 01 20:26:34 compute-0 sudo[48788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:34 compute-0 python3.9[48790]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:26:35 compute-0 sudo[48788]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:35 compute-0 sudo[49075]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrqgkhejjavibhjqiecqvepqjedmzlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620795.3404098-136-265917442774618/AnsiballZ_file.py'
Dec 01 20:26:35 compute-0 sudo[49075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:35 compute-0 python3.9[49077]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 01 20:26:35 compute-0 sudo[49075]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:36 compute-0 python3.9[49227]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:26:37 compute-0 sudo[49379]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fytwbfluddqrmvzelwcioisxrokormxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620796.8594012-152-227559685791390/AnsiballZ_dnf.py'
Dec 01 20:26:37 compute-0 sudo[49379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:37 compute-0 python3.9[49381]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:26:38 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 01 20:26:38 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 01 20:26:38 compute-0 systemd[1]: Reloading.
Dec 01 20:26:39 compute-0 systemd-rc-local-generator[49420]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:26:39 compute-0 systemd-sysv-generator[49425]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:26:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 01 20:26:39 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 01 20:26:39 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 01 20:26:39 compute-0 systemd[1]: run-rab37789689db486f95d3e48702b54278.service: Deactivated successfully.
Dec 01 20:26:39 compute-0 sudo[49379]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:40 compute-0 sudo[49695]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgirkewtmsnzqednovrmvrnogicqzllw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620799.815216-160-173766111366130/AnsiballZ_systemd.py'
Dec 01 20:26:40 compute-0 sudo[49695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:40 compute-0 python3.9[49697]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:26:40 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 01 20:26:40 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 01 20:26:40 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 01 20:26:40 compute-0 systemd[1]: Stopping Network Manager...
Dec 01 20:26:40 compute-0 NetworkManager[7247]: <info>  [1764620800.4648] caught SIGTERM, shutting down normally.
Dec 01 20:26:40 compute-0 NetworkManager[7247]: <info>  [1764620800.4661] dhcp4 (eth0): canceled DHCP transaction
Dec 01 20:26:40 compute-0 NetworkManager[7247]: <info>  [1764620800.4661] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 01 20:26:40 compute-0 NetworkManager[7247]: <info>  [1764620800.4661] dhcp4 (eth0): state changed no lease
Dec 01 20:26:40 compute-0 NetworkManager[7247]: <info>  [1764620800.4663] manager: NetworkManager state is now CONNECTED_SITE
Dec 01 20:26:40 compute-0 NetworkManager[7247]: <info>  [1764620800.4732] exiting (success)
Dec 01 20:26:40 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 01 20:26:40 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 01 20:26:40 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 01 20:26:40 compute-0 systemd[1]: Stopped Network Manager.
Dec 01 20:26:40 compute-0 systemd[1]: NetworkManager.service: Consumed 16.276s CPU time, 4.1M memory peak, read 0B from disk, written 18.0K to disk.
Dec 01 20:26:40 compute-0 systemd[1]: Starting Network Manager...
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.5553] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:59640efb-8f9f-4203-9412-dcffb4b15890)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.5555] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.5606] manager[0x564a079d7090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 01 20:26:40 compute-0 systemd[1]: Starting Hostname Service...
Dec 01 20:26:40 compute-0 systemd[1]: Started Hostname Service.
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6294] hostname: hostname: using hostnamed
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6294] hostname: static hostname changed from (none) to "compute-0"
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6298] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6302] manager[0x564a079d7090]: rfkill: Wi-Fi hardware radio set enabled
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6303] manager[0x564a079d7090]: rfkill: WWAN hardware radio set enabled
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6321] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6328] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6328] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6329] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6329] manager: Networking is enabled by state file
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6331] settings: Loaded settings plugin: keyfile (internal)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6334] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6355] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6362] dhcp: init: Using DHCP client 'internal'
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6364] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6368] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6373] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6379] device (lo): Activation: starting connection 'lo' (0db15d13-256d-400b-ba95-6470a1e83921)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6384] device (eth0): carrier: link connected
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6387] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6391] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6392] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6396] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6401] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6405] device (eth1): carrier: link connected
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6409] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6412] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (f93e717b-0f4e-5511-a89b-ffe6fe3d8145) (indicated)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6413] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6416] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6420] device (eth1): Activation: starting connection 'ci-private-network' (f93e717b-0f4e-5511-a89b-ffe6fe3d8145)
Dec 01 20:26:40 compute-0 systemd[1]: Started Network Manager.
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6425] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6429] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6431] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6433] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6434] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6436] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6437] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6439] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6441] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6447] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6449] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6456] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6468] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6476] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6479] dhcp4 (eth0): state changed new lease, address=38.102.83.214
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6485] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6490] device (lo): Activation: successful, device activated.
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6500] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6562] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6567] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6574] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6577] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6581] device (eth1): Activation: successful, device activated.
Dec 01 20:26:40 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6590] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6593] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6597] manager: NetworkManager state is now CONNECTED_SITE
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6600] device (eth0): Activation: successful, device activated.
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6605] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 01 20:26:40 compute-0 NetworkManager[49710]: <info>  [1764620800.6609] manager: startup complete
Dec 01 20:26:40 compute-0 sudo[49695]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:40 compute-0 systemd[1]: Finished Network Manager Wait Online.
Dec 01 20:26:41 compute-0 sudo[49921]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwmomjwipmyffpvaskymrbqpbkvwctgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620800.8289917-168-281040214361192/AnsiballZ_dnf.py'
Dec 01 20:26:41 compute-0 sudo[49921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:41 compute-0 python3.9[49923]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:26:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 01 20:26:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 01 20:26:48 compute-0 systemd[1]: Reloading.
Dec 01 20:26:48 compute-0 systemd-sysv-generator[49990]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:26:48 compute-0 systemd-rc-local-generator[49987]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:26:48 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 01 20:26:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 01 20:26:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 01 20:26:49 compute-0 systemd[1]: run-r7c78bbbf11154df8b302c44db77cb24d.service: Deactivated successfully.
Dec 01 20:26:50 compute-0 sudo[49921]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:50 compute-0 sudo[50391]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqxuhvzzxgrhhoaxunoiucednzrypmeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620810.4347482-180-260516893782255/AnsiballZ_stat.py'
Dec 01 20:26:50 compute-0 sudo[50391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:50 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 01 20:26:50 compute-0 python3.9[50393]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:26:50 compute-0 sudo[50391]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:51 compute-0 sudo[50543]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pepucormitksszakknsncccnsqljbtxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620811.2288315-189-238220173814283/AnsiballZ_ini_file.py'
Dec 01 20:26:51 compute-0 sudo[50543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:51 compute-0 python3.9[50545]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:26:51 compute-0 sudo[50543]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:52 compute-0 sudo[50697]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wafdqiflzsfbsfezpbkbxxepagfmulvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620812.211246-199-141860979231672/AnsiballZ_ini_file.py'
Dec 01 20:26:52 compute-0 sudo[50697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:52 compute-0 python3.9[50699]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:26:52 compute-0 sudo[50697]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:53 compute-0 sudo[50849]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qanmopyikeglsvcueqbnxbhnkkewlooy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620812.9711432-199-91534670849657/AnsiballZ_ini_file.py'
Dec 01 20:26:53 compute-0 sudo[50849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:53 compute-0 python3.9[50851]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:26:53 compute-0 sudo[50849]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:54 compute-0 sudo[51001]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhcnglobgkdcrynbnpmueuvmnnwptihz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620813.7143729-214-205791450037395/AnsiballZ_ini_file.py'
Dec 01 20:26:54 compute-0 sudo[51001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:54 compute-0 python3.9[51003]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:26:54 compute-0 sudo[51001]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:54 compute-0 sudo[51153]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlapjrabmjmycjwhngqvsymqzptaixoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620814.6797452-214-225404653738487/AnsiballZ_ini_file.py'
Dec 01 20:26:54 compute-0 sudo[51153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:55 compute-0 python3.9[51155]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:26:55 compute-0 sudo[51153]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:55 compute-0 sudo[51305]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsqllmnnjfwxkmjpnuocktgcilphdsqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620815.3509514-229-113006681669144/AnsiballZ_stat.py'
Dec 01 20:26:55 compute-0 sudo[51305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:55 compute-0 python3.9[51307]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:26:55 compute-0 sudo[51305]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:56 compute-0 sudo[51428]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swhkdillloujditisiwsfuijtsrcgfry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620815.3509514-229-113006681669144/AnsiballZ_copy.py'
Dec 01 20:26:56 compute-0 sudo[51428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:56 compute-0 python3.9[51430]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764620815.3509514-229-113006681669144/.source _original_basename=.3io1bogz follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:26:56 compute-0 sudo[51428]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:57 compute-0 sudo[51580]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcxnbaalmtibkqjshwfxiyfvdxyzejff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620816.9828634-244-37254738549485/AnsiballZ_file.py'
Dec 01 20:26:57 compute-0 sudo[51580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:57 compute-0 python3.9[51582]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:26:57 compute-0 sudo[51580]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:58 compute-0 sudo[51732]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hccuzyvqjbzgsxfaodusmajdlowoalip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620817.7227569-252-277119493960021/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 01 20:26:58 compute-0 sudo[51732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:58 compute-0 python3.9[51734]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 01 20:26:58 compute-0 sudo[51732]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:58 compute-0 sudo[51884]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofuycosxwnpsnnmmqrfeolxdnepjfdgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620818.6225836-261-131766548066331/AnsiballZ_file.py'
Dec 01 20:26:58 compute-0 sudo[51884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:59 compute-0 python3.9[51886]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:26:59 compute-0 sudo[51884]: pam_unix(sudo:session): session closed for user root
Dec 01 20:26:59 compute-0 sudo[52036]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gorrvokrmhzkmabhxifjazrybicwfcqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620819.4509537-271-102382310464839/AnsiballZ_stat.py'
Dec 01 20:26:59 compute-0 sudo[52036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:26:59 compute-0 sudo[52036]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:00 compute-0 sudo[52160]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pixtcazkinydillsunycekcvxklieusz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620819.4509537-271-102382310464839/AnsiballZ_copy.py'
Dec 01 20:27:00 compute-0 sudo[52160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:00 compute-0 sudo[52160]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:01 compute-0 sudo[52312]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jabtariblxioonklzgiathugnebwmaqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620820.7704651-286-194269919862669/AnsiballZ_slurp.py'
Dec 01 20:27:01 compute-0 sudo[52312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:01 compute-0 python3.9[52314]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 01 20:27:01 compute-0 sudo[52312]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:02 compute-0 sudo[52487]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssjgxdnysaswzpusebzltypaodykhvpx ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620821.7815208-295-21900452330420/async_wrapper.py j317088210829 300 /home/zuul/.ansible/tmp/ansible-tmp-1764620821.7815208-295-21900452330420/AnsiballZ_edpm_os_net_config.py _'
Dec 01 20:27:02 compute-0 sudo[52487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:02 compute-0 ansible-async_wrapper.py[52489]: Invoked with j317088210829 300 /home/zuul/.ansible/tmp/ansible-tmp-1764620821.7815208-295-21900452330420/AnsiballZ_edpm_os_net_config.py _
Dec 01 20:27:02 compute-0 ansible-async_wrapper.py[52492]: Starting module and watcher
Dec 01 20:27:02 compute-0 ansible-async_wrapper.py[52492]: Start watching 52493 (300)
Dec 01 20:27:02 compute-0 ansible-async_wrapper.py[52493]: Start module (52493)
Dec 01 20:27:02 compute-0 ansible-async_wrapper.py[52489]: Return async_wrapper task started.
Dec 01 20:27:02 compute-0 sudo[52487]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:02 compute-0 python3.9[52494]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 01 20:27:03 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 01 20:27:03 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 01 20:27:03 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 01 20:27:03 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 01 20:27:03 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7199] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7222] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7768] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7771] audit: op="connection-add" uuid="5f0f1045-31dc-44b9-9d92-6d4d77f7be8b" name="br-ex-br" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7787] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7788] audit: op="connection-add" uuid="ff9c6401-38a3-481b-a419-2f0aaebc50e2" name="br-ex-port" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7801] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7803] audit: op="connection-add" uuid="d56b6b94-15c2-471e-aa6d-f2018dd68618" name="eth1-port" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7817] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7819] audit: op="connection-add" uuid="71b0f71c-8903-4d1b-9d74-b1dd9f4d11d5" name="vlan20-port" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7832] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7835] audit: op="connection-add" uuid="75559d99-499c-466d-a487-4795fa2b95bf" name="vlan21-port" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7848] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7850] audit: op="connection-add" uuid="067f4561-f82f-45f4-ad49-afdcea22d466" name="vlan22-port" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7861] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7863] audit: op="connection-add" uuid="d2c5773a-830d-457b-a9aa-c9c0a1a43ecc" name="vlan23-port" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7884] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7899] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7901] audit: op="connection-add" uuid="88d47ec8-497a-40eb-b28e-3a8261dc0a33" name="br-ex-if" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7953] audit: op="connection-update" uuid="f93e717b-0f4e-5511-a89b-ffe6fe3d8145" name="ci-private-network" args="connection.timestamp,connection.slave-type,connection.controller,connection.master,connection.port-type,ovs-external-ids.data,ipv4.dns,ipv4.addresses,ipv4.never-default,ipv4.routes,ipv4.method,ipv4.routing-rules,ovs-interface.type,ipv6.dns,ipv6.addresses,ipv6.routes,ipv6.addr-gen-mode,ipv6.method,ipv6.routing-rules" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7984] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.7989] audit: op="connection-add" uuid="08b1094a-8a50-4514-9c0c-d88940573e49" name="vlan20-if" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8014] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8016] audit: op="connection-add" uuid="d1769230-7522-420a-9e72-2f24bffa8eaf" name="vlan21-if" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8030] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8032] audit: op="connection-add" uuid="fbfb213e-1790-4b7c-8ae9-b8278fc80a19" name="vlan22-if" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8049] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8051] audit: op="connection-add" uuid="05e719da-bf5c-4546-9b80-27885bc00e63" name="vlan23-if" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8062] audit: op="connection-delete" uuid="7e823da8-b6f6-3eaa-b777-0c51f4576aa2" name="Wired connection 1" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8073] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8083] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8086] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (5f0f1045-31dc-44b9-9d92-6d4d77f7be8b)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8088] audit: op="connection-activate" uuid="5f0f1045-31dc-44b9-9d92-6d4d77f7be8b" name="br-ex-br" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8089] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8095] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8099] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (ff9c6401-38a3-481b-a419-2f0aaebc50e2)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8101] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8106] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8110] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d56b6b94-15c2-471e-aa6d-f2018dd68618)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8112] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8118] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8123] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (71b0f71c-8903-4d1b-9d74-b1dd9f4d11d5)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8125] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8130] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8135] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (75559d99-499c-466d-a487-4795fa2b95bf)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8137] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8142] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8146] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (067f4561-f82f-45f4-ad49-afdcea22d466)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8148] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8154] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8158] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (d2c5773a-830d-457b-a9aa-c9c0a1a43ecc)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8159] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8162] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8164] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8169] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8174] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8178] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (88d47ec8-497a-40eb-b28e-3a8261dc0a33)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8179] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8182] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8184] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8185] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8187] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8197] device (eth1): disconnecting for new activation request.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8198] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8201] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8203] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8205] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8208] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8212] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8216] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (08b1094a-8a50-4514-9c0c-d88940573e49)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8217] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8220] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8222] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8224] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8227] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8231] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8235] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (d1769230-7522-420a-9e72-2f24bffa8eaf)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8236] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8239] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8241] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8243] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8246] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8250] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8254] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (fbfb213e-1790-4b7c-8ae9-b8278fc80a19)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8255] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8258] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8260] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8262] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8265] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8269] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8274] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (05e719da-bf5c-4546-9b80-27885bc00e63)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8275] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8278] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8279] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8281] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8283] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8295] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8297] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8300] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8302] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8309] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8313] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8317] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8331] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8334] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8339] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8344] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8349] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8352] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 systemd-udevd[52500]: Network interface NamePolicy= disabled on kernel command line.
Dec 01 20:27:04 compute-0 kernel: Timeout policy base is empty
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8356] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8367] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8373] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8375] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8379] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8383] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8386] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8388] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8396] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8399] dhcp4 (eth0): canceled DHCP transaction
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8399] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8399] dhcp4 (eth0): state changed no lease
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8400] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8410] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8413] audit: op="device-reapply" interface="eth1" ifindex=3 pid=52495 uid=0 result="fail" reason="Device is not activated"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8442] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8449] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8459] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8464] dhcp4 (eth0): state changed new lease, address=38.102.83.214
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8467] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8470] device (eth1): disconnecting for new activation request.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8471] audit: op="connection-activate" uuid="f93e717b-0f4e-5511-a89b-ffe6fe3d8145" name="ci-private-network" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8536] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52495 uid=0 result="success"
Dec 01 20:27:04 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8558] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 01 20:27:04 compute-0 kernel: br-ex: entered promiscuous mode
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8783] device (eth1): Activation: starting connection 'ci-private-network' (f93e717b-0f4e-5511-a89b-ffe6fe3d8145)
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8787] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8794] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8798] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8803] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8807] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8814] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8815] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8816] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8818] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8819] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8820] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8831] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8837] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8840] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8843] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8847] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8850] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8853] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8856] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8860] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8863] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8866] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8870] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 kernel: vlan22: entered promiscuous mode
Dec 01 20:27:04 compute-0 systemd-udevd[52499]: Network interface NamePolicy= disabled on kernel command line.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8874] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8899] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8904] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 kernel: vlan23: entered promiscuous mode
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8954] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8958] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 systemd-udevd[52501]: Network interface NamePolicy= disabled on kernel command line.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8962] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8966] device (eth1): Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.8983] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9023] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 01 20:27:04 compute-0 kernel: vlan21: entered promiscuous mode
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9027] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9032] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9037] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9062] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 kernel: vlan20: entered promiscuous mode
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9113] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9127] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9130] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9137] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9142] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9178] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 01 20:27:04 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9194] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9235] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9236] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9241] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9249] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9250] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9255] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9269] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9279] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9311] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9312] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 01 20:27:04 compute-0 NetworkManager[49710]: <info>  [1764620824.9317] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 01 20:27:06 compute-0 NetworkManager[49710]: <info>  [1764620826.0610] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52495 uid=0 result="success"
Dec 01 20:27:06 compute-0 NetworkManager[49710]: <info>  [1764620826.3035] checkpoint[0x564a079ad950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 01 20:27:06 compute-0 NetworkManager[49710]: <info>  [1764620826.3037] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52495 uid=0 result="success"
Dec 01 20:27:06 compute-0 sudo[52852]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppwsifyheljixyfksivutvregqikcdfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620825.8540883-295-20840826998441/AnsiballZ_async_status.py'
Dec 01 20:27:06 compute-0 sudo[52852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:06 compute-0 NetworkManager[49710]: <info>  [1764620826.5427] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52495 uid=0 result="success"
Dec 01 20:27:06 compute-0 NetworkManager[49710]: <info>  [1764620826.5439] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52495 uid=0 result="success"
Dec 01 20:27:06 compute-0 python3.9[52854]: ansible-ansible.legacy.async_status Invoked with jid=j317088210829.52489 mode=status _async_dir=/root/.ansible_async
Dec 01 20:27:06 compute-0 sudo[52852]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:06 compute-0 NetworkManager[49710]: <info>  [1764620826.7168] audit: op="networking-control" arg="global-dns-configuration" pid=52495 uid=0 result="success"
Dec 01 20:27:06 compute-0 NetworkManager[49710]: <info>  [1764620826.7203] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 01 20:27:06 compute-0 NetworkManager[49710]: <info>  [1764620826.7226] audit: op="networking-control" arg="global-dns-configuration" pid=52495 uid=0 result="success"
Dec 01 20:27:06 compute-0 NetworkManager[49710]: <info>  [1764620826.7243] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52495 uid=0 result="success"
Dec 01 20:27:06 compute-0 NetworkManager[49710]: <info>  [1764620826.8785] checkpoint[0x564a079ada20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 01 20:27:06 compute-0 NetworkManager[49710]: <info>  [1764620826.8790] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52495 uid=0 result="success"
Dec 01 20:27:06 compute-0 ansible-async_wrapper.py[52493]: Module complete (52493)
Dec 01 20:27:07 compute-0 ansible-async_wrapper.py[52492]: Done in kid B.
Dec 01 20:27:09 compute-0 sudo[52957]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgellbpufydjouavdxeropwswacmpmwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620825.8540883-295-20840826998441/AnsiballZ_async_status.py'
Dec 01 20:27:09 compute-0 sudo[52957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:10 compute-0 python3.9[52959]: ansible-ansible.legacy.async_status Invoked with jid=j317088210829.52489 mode=status _async_dir=/root/.ansible_async
Dec 01 20:27:10 compute-0 sudo[52957]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:10 compute-0 sudo[53057]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhwmhiizdjlgwdgvawcyhmcitfsdkunf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620825.8540883-295-20840826998441/AnsiballZ_async_status.py'
Dec 01 20:27:10 compute-0 sudo[53057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:10 compute-0 python3.9[53059]: ansible-ansible.legacy.async_status Invoked with jid=j317088210829.52489 mode=cleanup _async_dir=/root/.ansible_async
Dec 01 20:27:10 compute-0 sudo[53057]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:10 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 01 20:27:11 compute-0 sudo[53211]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aihcqaxltdbuvnpoxrbslilmmedcqgen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620830.8728614-322-251747666371834/AnsiballZ_stat.py'
Dec 01 20:27:11 compute-0 sudo[53211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:11 compute-0 python3.9[53213]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:27:11 compute-0 sudo[53211]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:11 compute-0 sudo[53334]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrseumxfyesjjhroqefzzyprwotroaqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620830.8728614-322-251747666371834/AnsiballZ_copy.py'
Dec 01 20:27:11 compute-0 sudo[53334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:11 compute-0 python3.9[53336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764620830.8728614-322-251747666371834/.source.returncode _original_basename=.0duiz77y follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:27:11 compute-0 sudo[53334]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:12 compute-0 sudo[53486]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmrqkyznfknazfjfowlgxedqsuvqswrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620832.1427662-338-136551986100138/AnsiballZ_stat.py'
Dec 01 20:27:12 compute-0 sudo[53486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:12 compute-0 python3.9[53488]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:27:12 compute-0 sudo[53486]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:12 compute-0 sudo[53609]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikaaljqzabqicrhsyihiwwgeddoozcaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620832.1427662-338-136551986100138/AnsiballZ_copy.py'
Dec 01 20:27:12 compute-0 sudo[53609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:13 compute-0 python3.9[53611]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764620832.1427662-338-136551986100138/.source.cfg _original_basename=.c1lm52yu follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:27:13 compute-0 sudo[53609]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:13 compute-0 sudo[53762]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxotzacswseahwujzlrtcgyozhjzzfcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620833.2579675-353-37271059369617/AnsiballZ_systemd.py'
Dec 01 20:27:13 compute-0 sudo[53762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:13 compute-0 python3.9[53764]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:27:13 compute-0 systemd[1]: Reloading Network Manager...
Dec 01 20:27:14 compute-0 NetworkManager[49710]: <info>  [1764620834.0081] audit: op="reload" arg="0" pid=53768 uid=0 result="success"
Dec 01 20:27:14 compute-0 NetworkManager[49710]: <info>  [1764620834.0087] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 01 20:27:14 compute-0 systemd[1]: Reloaded Network Manager.
Dec 01 20:27:14 compute-0 sudo[53762]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:14 compute-0 sshd-session[45714]: Connection closed by 192.168.122.30 port 55730
Dec 01 20:27:14 compute-0 sshd-session[45711]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:27:14 compute-0 systemd-logind[796]: Session 11 logged out. Waiting for processes to exit.
Dec 01 20:27:14 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 01 20:27:14 compute-0 systemd[1]: session-11.scope: Consumed 51.164s CPU time.
Dec 01 20:27:14 compute-0 systemd-logind[796]: Removed session 11.
Dec 01 20:27:19 compute-0 sshd-session[53799]: Accepted publickey for zuul from 192.168.122.30 port 43500 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:27:19 compute-0 systemd-logind[796]: New session 12 of user zuul.
Dec 01 20:27:19 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 01 20:27:19 compute-0 sshd-session[53799]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:27:20 compute-0 python3.9[53952]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:27:21 compute-0 python3.9[54106]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:27:23 compute-0 python3.9[54300]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:27:23 compute-0 sshd-session[53802]: Connection closed by 192.168.122.30 port 43500
Dec 01 20:27:23 compute-0 sshd-session[53799]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:27:23 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 01 20:27:23 compute-0 systemd[1]: session-12.scope: Consumed 2.519s CPU time.
Dec 01 20:27:23 compute-0 systemd-logind[796]: Session 12 logged out. Waiting for processes to exit.
Dec 01 20:27:23 compute-0 systemd-logind[796]: Removed session 12.
Dec 01 20:27:24 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 01 20:27:29 compute-0 sshd-session[54328]: Accepted publickey for zuul from 192.168.122.30 port 43516 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:27:29 compute-0 systemd-logind[796]: New session 13 of user zuul.
Dec 01 20:27:29 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 01 20:27:29 compute-0 sshd-session[54328]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:27:30 compute-0 python3.9[54482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:27:31 compute-0 python3.9[54636]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:27:31 compute-0 sudo[54790]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tysciphmrjrzycrnqzmarbmqtgehfbco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620851.4204485-40-160708895140900/AnsiballZ_setup.py'
Dec 01 20:27:31 compute-0 sudo[54790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:31 compute-0 python3.9[54792]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:27:32 compute-0 sudo[54790]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:32 compute-0 sudo[54875]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okoduqplefrmwvukhscmjzvxaeukuinj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620851.4204485-40-160708895140900/AnsiballZ_dnf.py'
Dec 01 20:27:32 compute-0 sudo[54875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:32 compute-0 python3.9[54877]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:27:34 compute-0 sudo[54875]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:34 compute-0 sudo[55028]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfzkbiccndhijsmifuckpmztkdopywqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620854.3873444-52-255995433909010/AnsiballZ_setup.py'
Dec 01 20:27:34 compute-0 sudo[55028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:34 compute-0 python3.9[55030]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:27:35 compute-0 sudo[55028]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:35 compute-0 sudo[55224]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzllydilnvbdmuqiyttrzmugbfxwtdre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620855.5325146-63-87171649658791/AnsiballZ_file.py'
Dec 01 20:27:35 compute-0 sudo[55224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:36 compute-0 python3.9[55226]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:27:36 compute-0 sudo[55224]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:36 compute-0 sudo[55376]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmkfnqgbmbqeqxgwvmcuowylbpjltdcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620856.3834405-71-35069986091042/AnsiballZ_command.py'
Dec 01 20:27:36 compute-0 sudo[55376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:37 compute-0 python3.9[55378]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:27:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3538363748-merged.mount: Deactivated successfully.
Dec 01 20:27:37 compute-0 podman[55379]: 2025-12-01 20:27:37.104059908 +0000 UTC m=+0.056090808 system refresh
Dec 01 20:27:37 compute-0 sudo[55376]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:37 compute-0 sudo[55538]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tehjazfzyhpqxsisointcrjzacoxrbkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620857.3248734-79-61817078052562/AnsiballZ_stat.py'
Dec 01 20:27:37 compute-0 sudo[55538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:37 compute-0 python3.9[55540]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:27:37 compute-0 sudo[55538]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:27:38 compute-0 sudo[55661]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iebcthxngoalsyjwfnslewxbcxyprgwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620857.3248734-79-61817078052562/AnsiballZ_copy.py'
Dec 01 20:27:38 compute-0 sudo[55661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:38 compute-0 python3.9[55663]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620857.3248734-79-61817078052562/.source.json follow=False _original_basename=podman_network_config.j2 checksum=f22732830790a88e71c76028c43f3363f4b3be56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:27:38 compute-0 sudo[55661]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:39 compute-0 sudo[55813]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxnkqxhpoujoiwznlgwtfnytfqxpmzgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620858.8527513-94-18670998739585/AnsiballZ_stat.py'
Dec 01 20:27:39 compute-0 sudo[55813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:39 compute-0 python3.9[55815]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:27:39 compute-0 sudo[55813]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:39 compute-0 sudo[55936]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qznayhzwxlkrzobkjavhoflxoazxubrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620858.8527513-94-18670998739585/AnsiballZ_copy.py'
Dec 01 20:27:39 compute-0 sudo[55936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:39 compute-0 python3.9[55938]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764620858.8527513-94-18670998739585/.source.conf follow=False _original_basename=registries.conf.j2 checksum=1f3eae670902d81b6898b401f0bbba899d0240bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:27:39 compute-0 sudo[55936]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:40 compute-0 sudo[56088]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlzlftpqqbukuvkomlcaqbwbwylkcmyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620860.0763373-110-201618725929483/AnsiballZ_ini_file.py'
Dec 01 20:27:40 compute-0 sudo[56088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:40 compute-0 python3.9[56090]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:27:40 compute-0 sudo[56088]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:41 compute-0 sudo[56240]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnaxktdmlnqhitbkyzdtbmfghgofqrob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620860.9207928-110-200139185331613/AnsiballZ_ini_file.py'
Dec 01 20:27:41 compute-0 sudo[56240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:41 compute-0 python3.9[56242]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:27:41 compute-0 sudo[56240]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:41 compute-0 sudo[56392]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqwrmficzmxuigijtfogzodlaoryvtjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620861.6344702-110-54197550369507/AnsiballZ_ini_file.py'
Dec 01 20:27:41 compute-0 sudo[56392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:42 compute-0 python3.9[56394]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:27:42 compute-0 sudo[56392]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:42 compute-0 sudo[56544]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezuonyvuuorbzalolufbbmsfmwihxqio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620862.2146013-110-187991984749224/AnsiballZ_ini_file.py'
Dec 01 20:27:42 compute-0 sudo[56544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:42 compute-0 python3.9[56546]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:27:42 compute-0 sudo[56544]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:43 compute-0 sudo[56696]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnnrtlmbydfgmhgzsltdqkqvyhhzwrvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620862.8814323-141-64118915393210/AnsiballZ_dnf.py'
Dec 01 20:27:43 compute-0 sudo[56696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:43 compute-0 python3.9[56698]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:27:44 compute-0 sudo[56696]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:45 compute-0 sudo[56849]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqoqtsglhfbxynavrmxmsfvetvwbieqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620865.0862255-152-278843525336042/AnsiballZ_setup.py'
Dec 01 20:27:45 compute-0 sudo[56849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:45 compute-0 python3.9[56851]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:27:45 compute-0 sudo[56849]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:46 compute-0 sudo[57003]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myzqgiitoktyjbexvzepiqzxyzmwvrjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620865.8284307-160-193264332763800/AnsiballZ_stat.py'
Dec 01 20:27:46 compute-0 sudo[57003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:46 compute-0 python3.9[57005]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:27:46 compute-0 sudo[57003]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:46 compute-0 sudo[57155]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpuvlcplfbwcxqlovcncavtjoobglrvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620866.5416856-169-245702359379053/AnsiballZ_stat.py'
Dec 01 20:27:46 compute-0 sudo[57155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:47 compute-0 python3.9[57157]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:27:47 compute-0 sudo[57155]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:47 compute-0 sudo[57307]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twpoaipupgxgtftwhzbecamdshpwozja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620867.3190863-179-85137523369863/AnsiballZ_command.py'
Dec 01 20:27:47 compute-0 sudo[57307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:47 compute-0 python3.9[57309]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:27:47 compute-0 sudo[57307]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:48 compute-0 sudo[57460]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzefoumvrtcuwbwlbszofzvcmnmplflm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620868.0657976-189-153247389757141/AnsiballZ_service_facts.py'
Dec 01 20:27:48 compute-0 sudo[57460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:48 compute-0 python3.9[57462]: ansible-service_facts Invoked
Dec 01 20:27:48 compute-0 network[57479]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 01 20:27:48 compute-0 network[57480]: 'network-scripts' will be removed from distribution in near future.
Dec 01 20:27:48 compute-0 network[57481]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 01 20:27:52 compute-0 sudo[57460]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:53 compute-0 sudo[57764]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjghtzrvoeadsmzxzextzpvfnnwwyozp ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764620873.5293741-204-205649198242425/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764620873.5293741-204-205649198242425/args'
Dec 01 20:27:53 compute-0 sudo[57764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:53 compute-0 sudo[57764]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:54 compute-0 sudo[57931]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afkqzhellxnavrozvfyrsfspnqwvmfoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620874.2861085-215-240263283656605/AnsiballZ_dnf.py'
Dec 01 20:27:54 compute-0 sudo[57931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:54 compute-0 python3.9[57933]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:27:56 compute-0 sudo[57931]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:57 compute-0 sudo[58084]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giwjiuplqyqonairmrgmqafgpyaoxbsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620876.6997504-228-237949070681784/AnsiballZ_package_facts.py'
Dec 01 20:27:57 compute-0 sudo[58084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:57 compute-0 python3.9[58086]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 01 20:27:58 compute-0 sudo[58084]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:59 compute-0 sudo[58236]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tznaidwhljktbzzsmmqdkobalxscqhyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620878.5833833-238-44539943197174/AnsiballZ_stat.py'
Dec 01 20:27:59 compute-0 sudo[58236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:59 compute-0 python3.9[58238]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:27:59 compute-0 sudo[58236]: pam_unix(sudo:session): session closed for user root
Dec 01 20:27:59 compute-0 sudo[58361]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyfqaluiypuduwergdpmwfumtgpqxyit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620878.5833833-238-44539943197174/AnsiballZ_copy.py'
Dec 01 20:27:59 compute-0 sudo[58361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:27:59 compute-0 python3.9[58363]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764620878.5833833-238-44539943197174/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:27:59 compute-0 sudo[58361]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:00 compute-0 sudo[58515]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzmuwbizlvqshbnefziprvixmedgjziw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620880.1482677-253-179470691583301/AnsiballZ_stat.py'
Dec 01 20:28:00 compute-0 sudo[58515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:00 compute-0 python3.9[58517]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:00 compute-0 sudo[58515]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:01 compute-0 sudo[58640]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coxzuzbmlogvlbymrkvjflvxwmmwjexx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620880.1482677-253-179470691583301/AnsiballZ_copy.py'
Dec 01 20:28:01 compute-0 sudo[58640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:01 compute-0 python3.9[58642]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764620880.1482677-253-179470691583301/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:01 compute-0 sudo[58640]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:02 compute-0 sudo[58794]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ailmhwtuxhhtsqcivitxecpwrtmgviek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620881.9869819-274-172103102624852/AnsiballZ_lineinfile.py'
Dec 01 20:28:02 compute-0 sudo[58794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:02 compute-0 python3.9[58796]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:02 compute-0 sudo[58794]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:03 compute-0 sudo[58948]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxrhcudrzbkhanbhsdeayslhzzfsjldt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620883.3415318-289-68712209009327/AnsiballZ_setup.py'
Dec 01 20:28:03 compute-0 sudo[58948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:04 compute-0 python3.9[58950]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:28:04 compute-0 sudo[58948]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:04 compute-0 sudo[59032]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-morvretiticziefeebyfwqrkpyvfmizf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620883.3415318-289-68712209009327/AnsiballZ_systemd.py'
Dec 01 20:28:04 compute-0 sudo[59032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:05 compute-0 python3.9[59034]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:28:05 compute-0 sudo[59032]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:05 compute-0 sudo[59186]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxkkquhphmfpslurycthjagemamlknuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620885.6881382-305-150412609008558/AnsiballZ_setup.py'
Dec 01 20:28:05 compute-0 sudo[59186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:06 compute-0 python3.9[59188]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:28:06 compute-0 sudo[59186]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:06 compute-0 sudo[59270]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emcklrjsejjwwyzaqhsuhbgcdefpvrzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620885.6881382-305-150412609008558/AnsiballZ_systemd.py'
Dec 01 20:28:06 compute-0 sudo[59270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:07 compute-0 python3.9[59272]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:28:07 compute-0 chronyd[798]: chronyd exiting
Dec 01 20:28:07 compute-0 systemd[1]: Stopping NTP client/server...
Dec 01 20:28:07 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 01 20:28:07 compute-0 systemd[1]: Stopped NTP client/server.
Dec 01 20:28:07 compute-0 systemd[1]: Starting NTP client/server...
Dec 01 20:28:07 compute-0 chronyd[59280]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 01 20:28:07 compute-0 chronyd[59280]: Frequency -31.517 +/- 0.149 ppm read from /var/lib/chrony/drift
Dec 01 20:28:07 compute-0 chronyd[59280]: Loaded seccomp filter (level 2)
Dec 01 20:28:07 compute-0 systemd[1]: Started NTP client/server.
Dec 01 20:28:07 compute-0 sudo[59270]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:07 compute-0 sshd-session[54331]: Connection closed by 192.168.122.30 port 43516
Dec 01 20:28:07 compute-0 sshd-session[54328]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:28:07 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 01 20:28:07 compute-0 systemd[1]: session-13.scope: Consumed 27.037s CPU time.
Dec 01 20:28:07 compute-0 systemd-logind[796]: Session 13 logged out. Waiting for processes to exit.
Dec 01 20:28:07 compute-0 systemd-logind[796]: Removed session 13.
Dec 01 20:28:13 compute-0 sshd-session[59306]: Accepted publickey for zuul from 192.168.122.30 port 53336 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:28:13 compute-0 systemd-logind[796]: New session 14 of user zuul.
Dec 01 20:28:13 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 01 20:28:13 compute-0 sshd-session[59306]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:28:14 compute-0 sudo[59459]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlcjqkctftopcmzuulsejdzvdavoeucq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620893.7582633-22-143930379508438/AnsiballZ_file.py'
Dec 01 20:28:14 compute-0 sudo[59459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:14 compute-0 python3.9[59461]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:14 compute-0 sudo[59459]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:15 compute-0 sudo[59611]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqmtswonudhoofzdwntrmtwpcdchnthl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620894.821407-34-230270345035955/AnsiballZ_stat.py'
Dec 01 20:28:15 compute-0 sudo[59611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:15 compute-0 python3.9[59613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:15 compute-0 sudo[59611]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:16 compute-0 sudo[59734]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdmimvqhjesohjgwlzyqcpskvnidgyjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620894.821407-34-230270345035955/AnsiballZ_copy.py'
Dec 01 20:28:16 compute-0 sudo[59734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:16 compute-0 python3.9[59736]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764620894.821407-34-230270345035955/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:16 compute-0 sudo[59734]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:16 compute-0 sshd-session[59309]: Connection closed by 192.168.122.30 port 53336
Dec 01 20:28:16 compute-0 sshd-session[59306]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:28:16 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 01 20:28:16 compute-0 systemd[1]: session-14.scope: Consumed 1.908s CPU time.
Dec 01 20:28:16 compute-0 systemd-logind[796]: Session 14 logged out. Waiting for processes to exit.
Dec 01 20:28:16 compute-0 systemd-logind[796]: Removed session 14.
Dec 01 20:28:22 compute-0 sshd-session[59761]: Accepted publickey for zuul from 192.168.122.30 port 53520 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:28:22 compute-0 systemd-logind[796]: New session 15 of user zuul.
Dec 01 20:28:22 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 01 20:28:22 compute-0 sshd-session[59761]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:28:23 compute-0 python3.9[59914]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:28:24 compute-0 sudo[60068]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbpxjdjqwznbmxtjkbikfxxorrtlshpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620904.2895195-33-128483249579658/AnsiballZ_file.py'
Dec 01 20:28:24 compute-0 sudo[60068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:24 compute-0 python3.9[60070]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:24 compute-0 sudo[60068]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:25 compute-0 sudo[60243]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvmwcujebgndemhjvzfzvzkzqbziyxst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620905.1367528-41-65121708510558/AnsiballZ_stat.py'
Dec 01 20:28:25 compute-0 sudo[60243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:25 compute-0 python3.9[60245]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:25 compute-0 sudo[60243]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:26 compute-0 sudo[60366]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klmeryxfpnawnpinbumqojuobzejtahj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620905.1367528-41-65121708510558/AnsiballZ_copy.py'
Dec 01 20:28:26 compute-0 sudo[60366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:26 compute-0 python3.9[60368]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764620905.1367528-41-65121708510558/.source.json _original_basename=.cx79wnss follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:26 compute-0 sudo[60366]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:27 compute-0 sudo[60518]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdvhyeatlxcqklyaixjtiyjaxnbyutqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620906.9253156-64-30944147033004/AnsiballZ_stat.py'
Dec 01 20:28:27 compute-0 sudo[60518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:27 compute-0 python3.9[60520]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:27 compute-0 sudo[60518]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:27 compute-0 sudo[60641]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sryypefobngbnjuhqepsrdadxegtrupt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620906.9253156-64-30944147033004/AnsiballZ_copy.py'
Dec 01 20:28:27 compute-0 sudo[60641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:27 compute-0 python3.9[60643]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764620906.9253156-64-30944147033004/.source _original_basename=.wack4tz8 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:27 compute-0 sudo[60641]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:28 compute-0 sudo[60793]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipsabdzwrulmqlvhmbvnuyyllibkgvsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620908.1663806-80-35046606205813/AnsiballZ_file.py'
Dec 01 20:28:28 compute-0 sudo[60793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:28 compute-0 python3.9[60795]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:28:28 compute-0 sudo[60793]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:29 compute-0 sudo[60945]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbmnoigvqbbtuvinuweirwllnlnbqsus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620908.9494233-88-37609069982802/AnsiballZ_stat.py'
Dec 01 20:28:29 compute-0 sudo[60945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:29 compute-0 python3.9[60947]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:29 compute-0 sudo[60945]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:29 compute-0 sudo[61068]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xahkipfqwpmkfabtpjiehevplqflnpzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620908.9494233-88-37609069982802/AnsiballZ_copy.py'
Dec 01 20:28:29 compute-0 sudo[61068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:30 compute-0 python3.9[61070]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764620908.9494233-88-37609069982802/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:28:30 compute-0 sudo[61068]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:30 compute-0 sudo[61220]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gposusvuzuepbfxdaypfkxrecoqogegj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620910.1603694-88-253101439247079/AnsiballZ_stat.py'
Dec 01 20:28:30 compute-0 sudo[61220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:30 compute-0 python3.9[61222]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:30 compute-0 sudo[61220]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:31 compute-0 sudo[61343]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sabvpnklcvsyasstcrkqppoqtqfnmxpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620910.1603694-88-253101439247079/AnsiballZ_copy.py'
Dec 01 20:28:31 compute-0 sudo[61343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:31 compute-0 python3.9[61345]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764620910.1603694-88-253101439247079/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:28:31 compute-0 sudo[61343]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:31 compute-0 sudo[61495]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwkaybgtoztztwewwivkvpmwcjnzbnfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620911.6303334-117-237062371336038/AnsiballZ_file.py'
Dec 01 20:28:31 compute-0 sudo[61495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:32 compute-0 python3.9[61497]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:32 compute-0 sudo[61495]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:32 compute-0 sudo[61647]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwgrwrtbnymeuiloojujwgjpbieprcpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620912.3803244-125-240568501757553/AnsiballZ_stat.py'
Dec 01 20:28:32 compute-0 sudo[61647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:32 compute-0 python3.9[61649]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:32 compute-0 sudo[61647]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:33 compute-0 sudo[61770]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhednkqixyehwkqztztljwctegvyojrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620912.3803244-125-240568501757553/AnsiballZ_copy.py'
Dec 01 20:28:33 compute-0 sudo[61770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:33 compute-0 python3.9[61772]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620912.3803244-125-240568501757553/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:33 compute-0 sudo[61770]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:34 compute-0 sudo[61922]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ascshyvigauxziublzhnklmoclwhbuvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620913.7604444-140-75363167926264/AnsiballZ_stat.py'
Dec 01 20:28:34 compute-0 sudo[61922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:34 compute-0 python3.9[61924]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:34 compute-0 sudo[61922]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:34 compute-0 sudo[62045]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkeadzogkhynzaidvtdrvkcqguzcokxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620913.7604444-140-75363167926264/AnsiballZ_copy.py'
Dec 01 20:28:34 compute-0 sudo[62045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:34 compute-0 python3.9[62047]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620913.7604444-140-75363167926264/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:34 compute-0 sudo[62045]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:35 compute-0 sudo[62197]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmednqpkzeserlrubxynepuafpvhwyyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620915.1490245-155-191687335489683/AnsiballZ_systemd.py'
Dec 01 20:28:35 compute-0 sudo[62197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:36 compute-0 python3.9[62199]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:28:36 compute-0 systemd[1]: Reloading.
Dec 01 20:28:36 compute-0 systemd-rc-local-generator[62227]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:28:36 compute-0 systemd-sysv-generator[62230]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:28:36 compute-0 systemd[1]: Reloading.
Dec 01 20:28:36 compute-0 systemd-sysv-generator[62263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:28:36 compute-0 systemd-rc-local-generator[62260]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:28:36 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 01 20:28:36 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 01 20:28:36 compute-0 sudo[62197]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:37 compute-0 sudo[62425]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrjscihukmajtxcjadhuorgaqgkfcrxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620916.904437-163-131067644737701/AnsiballZ_stat.py'
Dec 01 20:28:37 compute-0 sudo[62425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:37 compute-0 python3.9[62427]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:37 compute-0 sudo[62425]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:37 compute-0 sudo[62548]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jztnzwngwocibwbdneboicpwlzjlqats ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620916.904437-163-131067644737701/AnsiballZ_copy.py'
Dec 01 20:28:37 compute-0 sudo[62548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:38 compute-0 python3.9[62550]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620916.904437-163-131067644737701/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:38 compute-0 sudo[62548]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:38 compute-0 sudo[62700]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlkqzvbqmkleuxfroktarztnrtamaixu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620918.3041558-178-217942278662130/AnsiballZ_stat.py'
Dec 01 20:28:38 compute-0 sudo[62700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:38 compute-0 python3.9[62702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:38 compute-0 sudo[62700]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:39 compute-0 sudo[62823]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgbxtisbcqrvzpdkyzsdaonshmscjrgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620918.3041558-178-217942278662130/AnsiballZ_copy.py'
Dec 01 20:28:39 compute-0 sudo[62823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:39 compute-0 python3.9[62825]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620918.3041558-178-217942278662130/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:39 compute-0 sudo[62823]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:40 compute-0 sudo[62975]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tossnajajhuxjqacqfsxjaggjwhtbvru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620919.698318-193-225516823219223/AnsiballZ_systemd.py'
Dec 01 20:28:40 compute-0 sudo[62975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:40 compute-0 python3.9[62977]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:28:40 compute-0 systemd[1]: Reloading.
Dec 01 20:28:40 compute-0 systemd-rc-local-generator[63005]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:28:40 compute-0 systemd-sysv-generator[63009]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:28:40 compute-0 systemd[1]: Reloading.
Dec 01 20:28:40 compute-0 systemd-sysv-generator[63044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:28:40 compute-0 systemd-rc-local-generator[63040]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:28:40 compute-0 systemd[1]: Starting Create netns directory...
Dec 01 20:28:40 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 01 20:28:40 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 01 20:28:40 compute-0 systemd[1]: Finished Create netns directory.
Dec 01 20:28:40 compute-0 sudo[62975]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:41 compute-0 python3.9[63202]: ansible-ansible.builtin.service_facts Invoked
Dec 01 20:28:41 compute-0 network[63219]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 01 20:28:41 compute-0 network[63220]: 'network-scripts' will be removed from distribution in near future.
Dec 01 20:28:41 compute-0 network[63221]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 01 20:28:46 compute-0 sudo[63481]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwktggrlnqbxvcnnbiheskjlfuaforum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620926.5799627-209-279445265415005/AnsiballZ_systemd.py'
Dec 01 20:28:46 compute-0 sudo[63481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:47 compute-0 python3.9[63483]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:28:48 compute-0 systemd[1]: Reloading.
Dec 01 20:28:48 compute-0 systemd-rc-local-generator[63511]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:28:48 compute-0 systemd-sysv-generator[63516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:28:48 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 01 20:28:48 compute-0 iptables.init[63523]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 01 20:28:48 compute-0 iptables.init[63523]: iptables: Flushing firewall rules: [  OK  ]
Dec 01 20:28:48 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 01 20:28:48 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 01 20:28:49 compute-0 sudo[63481]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:49 compute-0 sudo[63718]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clewfcorbffzbyilebcwncroywzuintt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620929.2217894-209-145476983444605/AnsiballZ_systemd.py'
Dec 01 20:28:49 compute-0 sudo[63718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:49 compute-0 python3.9[63720]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:28:49 compute-0 sudo[63718]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:50 compute-0 sudo[63872]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cksvrnfywekppaycbledlygxhqfxitye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620930.3168223-225-250173621267062/AnsiballZ_systemd.py'
Dec 01 20:28:50 compute-0 sudo[63872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:50 compute-0 python3.9[63874]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:28:51 compute-0 systemd[1]: Reloading.
Dec 01 20:28:51 compute-0 systemd-rc-local-generator[63900]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:28:51 compute-0 systemd-sysv-generator[63906]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:28:51 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 01 20:28:51 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 01 20:28:51 compute-0 sudo[63872]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:52 compute-0 sudo[64065]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ketlidkhoifyntbjojyztnrscddzhwjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620931.6768906-233-13978400929891/AnsiballZ_command.py'
Dec 01 20:28:52 compute-0 sudo[64065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:52 compute-0 python3.9[64067]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:28:52 compute-0 sudo[64065]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:53 compute-0 sudo[64218]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcjlybqtenrngbjnvsdeqkvollzhmqaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620932.9889607-247-250760617121378/AnsiballZ_stat.py'
Dec 01 20:28:53 compute-0 sudo[64218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:53 compute-0 python3.9[64220]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:53 compute-0 sudo[64218]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:54 compute-0 sudo[64343]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwsnfmezgqqubydgazfuowhaejcpbqnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620932.9889607-247-250760617121378/AnsiballZ_copy.py'
Dec 01 20:28:54 compute-0 sudo[64343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:54 compute-0 python3.9[64345]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764620932.9889607-247-250760617121378/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:54 compute-0 sudo[64343]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:55 compute-0 sudo[64496]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnagfuhtzatmduhvkfrboyqqzosbrbzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620934.645517-262-124680750753411/AnsiballZ_systemd.py'
Dec 01 20:28:55 compute-0 sudo[64496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:55 compute-0 python3.9[64498]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:28:55 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 01 20:28:55 compute-0 sshd[1007]: Received SIGHUP; restarting.
Dec 01 20:28:55 compute-0 sshd[1007]: Server listening on 0.0.0.0 port 22.
Dec 01 20:28:55 compute-0 sshd[1007]: Server listening on :: port 22.
Dec 01 20:28:55 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 01 20:28:55 compute-0 sudo[64496]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:56 compute-0 sudo[64652]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtfblcwenekrfrulfwufchgkqnrixvrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620935.6590614-270-211458308828951/AnsiballZ_file.py'
Dec 01 20:28:56 compute-0 sudo[64652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:56 compute-0 python3.9[64654]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:56 compute-0 sudo[64652]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:56 compute-0 sudo[64804]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfsqsqopbyhbtzwbnukdhfuasnmuotsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620936.4183903-278-264485151348303/AnsiballZ_stat.py'
Dec 01 20:28:56 compute-0 sudo[64804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:57 compute-0 python3.9[64806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:28:57 compute-0 sudo[64804]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:57 compute-0 sudo[64927]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywufohuwaiypqkxgvcwtvpddljbyvwkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620936.4183903-278-264485151348303/AnsiballZ_copy.py'
Dec 01 20:28:57 compute-0 sudo[64927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:57 compute-0 python3.9[64929]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620936.4183903-278-264485151348303/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:28:57 compute-0 sudo[64927]: pam_unix(sudo:session): session closed for user root
Dec 01 20:28:58 compute-0 sudo[65079]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwgkcvropusjyhumtvsltlpavgvdtpbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620937.9538095-296-21165742345199/AnsiballZ_timezone.py'
Dec 01 20:28:58 compute-0 sudo[65079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:28:58 compute-0 python3.9[65081]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 01 20:28:58 compute-0 systemd[1]: Starting Time & Date Service...
Dec 01 20:28:58 compute-0 systemd[1]: Started Time & Date Service.
Dec 01 20:28:59 compute-0 sudo[65079]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:00 compute-0 sudo[65235]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llamzvgqyxeugglzehruejgsgzyelsql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620940.1381085-305-40359806899452/AnsiballZ_file.py'
Dec 01 20:29:00 compute-0 sudo[65235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:00 compute-0 python3.9[65237]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:00 compute-0 sudo[65235]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:01 compute-0 sudo[65387]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrjaxgoeblubdxrkodxyhkphthocyxqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620940.984872-313-212912940143304/AnsiballZ_stat.py'
Dec 01 20:29:01 compute-0 sudo[65387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:01 compute-0 python3.9[65389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:29:01 compute-0 sudo[65387]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:02 compute-0 sudo[65510]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdirzliyivcqdpscumsewgxnrjurxigz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620940.984872-313-212912940143304/AnsiballZ_copy.py'
Dec 01 20:29:02 compute-0 sudo[65510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:02 compute-0 python3.9[65512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764620940.984872-313-212912940143304/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:02 compute-0 sudo[65510]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:02 compute-0 sudo[65662]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtysckcuftbssatojyompjmdxzkitfrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620942.468192-328-277626684038055/AnsiballZ_stat.py'
Dec 01 20:29:02 compute-0 sudo[65662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:03 compute-0 python3.9[65664]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:29:03 compute-0 sudo[65662]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:03 compute-0 sudo[65785]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtrbzmkdxqlfqtnendaxwsxnpuxaashz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620942.468192-328-277626684038055/AnsiballZ_copy.py'
Dec 01 20:29:03 compute-0 sudo[65785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:03 compute-0 python3.9[65787]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764620942.468192-328-277626684038055/.source.yaml _original_basename=.h5pn625u follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:03 compute-0 sudo[65785]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:04 compute-0 sudo[65937]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fftelrybrhaexzdufabplpolctgapwjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620943.9956794-343-84434827064252/AnsiballZ_stat.py'
Dec 01 20:29:04 compute-0 sudo[65937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:04 compute-0 python3.9[65939]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:29:04 compute-0 sudo[65937]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:05 compute-0 sudo[66060]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyqrpmtrtevcvselyuohbcmbxllnpake ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620943.9956794-343-84434827064252/AnsiballZ_copy.py'
Dec 01 20:29:05 compute-0 sudo[66060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:05 compute-0 python3.9[66062]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620943.9956794-343-84434827064252/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:05 compute-0 sudo[66060]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:05 compute-0 sudo[66212]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjcithbeasgvbnryouftttplvjabtqja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620945.4507189-358-92846112935891/AnsiballZ_command.py'
Dec 01 20:29:05 compute-0 sudo[66212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:05 compute-0 python3.9[66214]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:29:05 compute-0 sudo[66212]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:06 compute-0 sudo[66365]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csbjpvuegtyvagzetwscgozwyxkyuzws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620946.1897957-366-127098951609494/AnsiballZ_command.py'
Dec 01 20:29:06 compute-0 sudo[66365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:06 compute-0 python3.9[66367]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:29:06 compute-0 sudo[66365]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:07 compute-0 sudo[66518]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keotfycxdzyrrkbeekstiajwjqpysbns ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764620947.0532846-374-90942682220021/AnsiballZ_edpm_nftables_from_files.py'
Dec 01 20:29:07 compute-0 sudo[66518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:07 compute-0 python3[66520]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 01 20:29:07 compute-0 sudo[66518]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:08 compute-0 sudo[66670]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrfbcxnhvfbzuqmaqwpxqrdpglussiqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620948.0337477-382-90203720049895/AnsiballZ_stat.py'
Dec 01 20:29:08 compute-0 sudo[66670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:08 compute-0 python3.9[66672]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:29:08 compute-0 sudo[66670]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:09 compute-0 sudo[66793]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jknxkhvlapsuqnbmvcugxwrssivnuzyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620948.0337477-382-90203720049895/AnsiballZ_copy.py'
Dec 01 20:29:09 compute-0 sudo[66793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:09 compute-0 python3.9[66795]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620948.0337477-382-90203720049895/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:09 compute-0 sudo[66793]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:09 compute-0 sudo[66945]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcimjcwrducbhmppnkzibjlltzoxvofk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620949.5391092-397-90802817914108/AnsiballZ_stat.py'
Dec 01 20:29:09 compute-0 sudo[66945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:10 compute-0 python3.9[66947]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:29:10 compute-0 sudo[66945]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:10 compute-0 sudo[67068]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvehtmsmwduvilezvhpgstratgzrdkta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620949.5391092-397-90802817914108/AnsiballZ_copy.py'
Dec 01 20:29:10 compute-0 sudo[67068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:10 compute-0 python3.9[67070]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620949.5391092-397-90802817914108/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:10 compute-0 sudo[67068]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:11 compute-0 sudo[67220]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwxotddekrpwjhvjlphbgkzyyzjejpyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620950.8927057-412-74574299151061/AnsiballZ_stat.py'
Dec 01 20:29:11 compute-0 sudo[67220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:11 compute-0 python3.9[67222]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:29:11 compute-0 sudo[67220]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:11 compute-0 sudo[67343]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqnaepiwxukiusgmpvrkarmtclzzgkml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620950.8927057-412-74574299151061/AnsiballZ_copy.py'
Dec 01 20:29:11 compute-0 sudo[67343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:12 compute-0 python3.9[67345]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620950.8927057-412-74574299151061/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:12 compute-0 sudo[67343]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:12 compute-0 sudo[67495]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgpenyayzuyjtgujurayvwwvocnmqrzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620952.258142-427-78741439268527/AnsiballZ_stat.py'
Dec 01 20:29:12 compute-0 sudo[67495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:12 compute-0 python3.9[67497]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:29:12 compute-0 sudo[67495]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:13 compute-0 sudo[67618]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlisgqowkkkqdfxnztenatlgvekbcfoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620952.258142-427-78741439268527/AnsiballZ_copy.py'
Dec 01 20:29:13 compute-0 sudo[67618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:13 compute-0 python3.9[67620]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620952.258142-427-78741439268527/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:13 compute-0 sudo[67618]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:14 compute-0 sudo[67770]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjikahtsxrljwqnucbissxqcibuolpzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620953.5889032-442-262066065172947/AnsiballZ_stat.py'
Dec 01 20:29:14 compute-0 sudo[67770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:14 compute-0 python3.9[67772]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:29:14 compute-0 sudo[67770]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:14 compute-0 sudo[67893]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqdlqddyfsnmpjvxoppigzqftojkogre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620953.5889032-442-262066065172947/AnsiballZ_copy.py'
Dec 01 20:29:14 compute-0 sudo[67893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:14 compute-0 python3.9[67895]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764620953.5889032-442-262066065172947/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:14 compute-0 sudo[67893]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:15 compute-0 sudo[68045]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgwuskkkfghdcvzlnzmdfwgmccllnras ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620955.144105-457-126231944770502/AnsiballZ_file.py'
Dec 01 20:29:15 compute-0 sudo[68045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:15 compute-0 python3.9[68047]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:15 compute-0 sudo[68045]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:16 compute-0 sudo[68197]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjsuhcairpbuqlnzyymplwosogdimjdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620955.9394808-465-96803577474929/AnsiballZ_command.py'
Dec 01 20:29:16 compute-0 sudo[68197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:16 compute-0 python3.9[68199]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:29:16 compute-0 sudo[68197]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:17 compute-0 sudo[68356]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnueicfimcawtfwgotcaunsojgtseito ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620956.8333158-473-127262486690962/AnsiballZ_blockinfile.py'
Dec 01 20:29:17 compute-0 sudo[68356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:17 compute-0 python3.9[68358]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:17 compute-0 sudo[68356]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:18 compute-0 sudo[68509]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbiazlqnphvadmdpngfcbqpwlfqbvfto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620958.001514-482-162231709352622/AnsiballZ_file.py'
Dec 01 20:29:18 compute-0 sudo[68509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:18 compute-0 python3.9[68511]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:18 compute-0 sudo[68509]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:19 compute-0 sudo[68661]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ninedvckohafpfasesilviyzwinokwhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620958.8067214-482-50356673217368/AnsiballZ_file.py'
Dec 01 20:29:19 compute-0 sudo[68661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:19 compute-0 python3.9[68663]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:19 compute-0 sudo[68661]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:20 compute-0 sudo[68813]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdugxbzcemejypjtpznilrypjkukeqyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620959.690797-497-29397339485622/AnsiballZ_mount.py'
Dec 01 20:29:20 compute-0 sudo[68813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:20 compute-0 python3.9[68815]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 01 20:29:20 compute-0 sudo[68813]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:21 compute-0 sudo[68966]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnzmctpxmiqxpgayymajbyygybzjgfaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620960.7388055-497-62443634282674/AnsiballZ_mount.py'
Dec 01 20:29:21 compute-0 sudo[68966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:21 compute-0 python3.9[68968]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 01 20:29:21 compute-0 sudo[68966]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:21 compute-0 sshd-session[59764]: Connection closed by 192.168.122.30 port 53520
Dec 01 20:29:21 compute-0 sshd-session[59761]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:29:21 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 01 20:29:21 compute-0 systemd[1]: session-15.scope: Consumed 41.780s CPU time.
Dec 01 20:29:21 compute-0 systemd-logind[796]: Session 15 logged out. Waiting for processes to exit.
Dec 01 20:29:21 compute-0 systemd-logind[796]: Removed session 15.
Dec 01 20:29:27 compute-0 sshd-session[68994]: Accepted publickey for zuul from 192.168.122.30 port 53500 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:29:27 compute-0 systemd-logind[796]: New session 16 of user zuul.
Dec 01 20:29:27 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 01 20:29:27 compute-0 sshd-session[68994]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:29:27 compute-0 sudo[69147]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oujwjvacatrltctjxmlfaesyxepmaiix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620967.1936235-16-142509005021144/AnsiballZ_tempfile.py'
Dec 01 20:29:27 compute-0 sudo[69147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:28 compute-0 python3.9[69149]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 01 20:29:28 compute-0 sudo[69147]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:28 compute-0 sudo[69299]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnkpsdyhliszmusuecgclwniyfqzvezk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620968.2761524-28-168214874739634/AnsiballZ_stat.py'
Dec 01 20:29:28 compute-0 sudo[69299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:29 compute-0 python3.9[69301]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:29:29 compute-0 sudo[69299]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:29 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 01 20:29:30 compute-0 sudo[69453]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khclvgcdyhkysrlqbdemrdwcsytyejnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620969.263706-38-68410528867875/AnsiballZ_setup.py'
Dec 01 20:29:30 compute-0 sudo[69453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:30 compute-0 python3.9[69455]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:29:30 compute-0 sudo[69453]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:31 compute-0 sudo[69605]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkbnytkclnznqrkemhmgrpagvgmvwjeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620970.6544776-47-211476933070811/AnsiballZ_blockinfile.py'
Dec 01 20:29:31 compute-0 sudo[69605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:31 compute-0 python3.9[69607]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEtg5PojuPAz6xTrRrBGVFaY4f616wVkbk4JWBnE7XRAZC+o8ulOAcFDcenRNZI+OUaYuJJxrh734s3f9kKVWxpwDg6JPr+yX9ca/za0oEJKz+lyqzwZuFPEQg2i7BL/FFchcrU+rHMr78OnyeUZklBpfu79VWdnJiZ+gX3wZc5No5JHVVB9Tvc7DRGpB6ChOCRA3MsAzrKxI4r4Rrd/nyByUjU4fkCuUkbwd2spVGukPVBGXoayWAnhuUgTrW+lCh3nTtEV8dOTAOjbAZXZHCV2M0dLZxFICAqD36k+PVjSu8qWp2Hvu1g8B5N53Ujzft+1vyg53YnX5lPFXj4ONYa2ODnFte0RleXaYBTC++EGdlxuJ3J3B1FqEjfbN4eDNBRo/Rz7HyjVP1GzAPBuS2dUATfEzqbaQ944c7xPX5X2wz5taiXub+QeteDNVo4qcZiS88HsXzDhljPmecgrp5J82lj1b+gzj0asqIbToGE07if4P4UscX1iXX1EQHLZk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILQi31M0bDjk/+NlS8JOYX6DC83uvFAj0UguhgYwpDOl
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJYnricobpj1rSSYWt5fQI/QAzYdALTS6eg9FvCz6/m5p01CoLr/PbypPcJyrWdb9MCRlc4meQr4pHa+OSei3h4=
                                             create=True mode=0644 path=/tmp/ansible.50ef9mud state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:31 compute-0 sudo[69605]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:32 compute-0 sudo[69757]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyceyqxbnunrvhjbacnawrlsvbprnbzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620971.6640599-55-274732715703691/AnsiballZ_command.py'
Dec 01 20:29:32 compute-0 sudo[69757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:32 compute-0 python3.9[69759]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.50ef9mud' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:29:32 compute-0 sudo[69757]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:33 compute-0 sudo[69911]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phpbjedhxqxspqvdfclkzkmmakixgugu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620972.6312337-63-28001456443585/AnsiballZ_file.py'
Dec 01 20:29:33 compute-0 sudo[69911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:33 compute-0 python3.9[69913]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.50ef9mud state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:33 compute-0 sudo[69911]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:33 compute-0 sshd-session[68997]: Connection closed by 192.168.122.30 port 53500
Dec 01 20:29:33 compute-0 sshd-session[68994]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:29:33 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 01 20:29:33 compute-0 systemd[1]: session-16.scope: Consumed 4.366s CPU time.
Dec 01 20:29:33 compute-0 systemd-logind[796]: Session 16 logged out. Waiting for processes to exit.
Dec 01 20:29:33 compute-0 systemd-logind[796]: Removed session 16.
Dec 01 20:29:39 compute-0 sshd-session[69938]: Accepted publickey for zuul from 192.168.122.30 port 45090 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:29:39 compute-0 systemd-logind[796]: New session 17 of user zuul.
Dec 01 20:29:39 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 01 20:29:39 compute-0 sshd-session[69938]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:29:40 compute-0 python3.9[70091]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:29:41 compute-0 sudo[70245]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtdfyuzdbkyuoxglkhbdwaopzycldtzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620980.9900405-32-186080359408928/AnsiballZ_systemd.py'
Dec 01 20:29:41 compute-0 sudo[70245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:41 compute-0 python3.9[70247]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 01 20:29:42 compute-0 sudo[70245]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:42 compute-0 sudo[70399]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqenwhaiouqlyftvdwkmehmreyfcvwpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620982.2241151-40-201935120579694/AnsiballZ_systemd.py'
Dec 01 20:29:42 compute-0 sudo[70399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:42 compute-0 python3.9[70401]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:29:42 compute-0 sudo[70399]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:43 compute-0 sudo[70552]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzrgduvrawtsvytzhiqukcklcchejxbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620983.2275848-49-36257224419713/AnsiballZ_command.py'
Dec 01 20:29:43 compute-0 sudo[70552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:43 compute-0 python3.9[70554]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:29:43 compute-0 sudo[70552]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:44 compute-0 sudo[70705]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxnfowbwcdyucjfvqtdsucncedxavosd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620984.1040082-57-222795216974715/AnsiballZ_stat.py'
Dec 01 20:29:44 compute-0 sudo[70705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:44 compute-0 python3.9[70707]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:29:44 compute-0 sudo[70705]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:45 compute-0 sudo[70859]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqyjlrxjprsvrppynbhmbodbixiimqch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620985.0059445-65-4428962827411/AnsiballZ_command.py'
Dec 01 20:29:45 compute-0 sudo[70859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:45 compute-0 python3.9[70861]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:29:45 compute-0 sudo[70859]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:46 compute-0 sudo[71014]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgbopnzqbgcyfkjhdjfnccpjfrgzpdix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620985.7190146-73-30225458104732/AnsiballZ_file.py'
Dec 01 20:29:46 compute-0 sudo[71014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:46 compute-0 python3.9[71016]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:29:46 compute-0 sudo[71014]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:46 compute-0 sshd-session[69941]: Connection closed by 192.168.122.30 port 45090
Dec 01 20:29:46 compute-0 sshd-session[69938]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:29:46 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 01 20:29:46 compute-0 systemd[1]: session-17.scope: Consumed 5.257s CPU time.
Dec 01 20:29:46 compute-0 systemd-logind[796]: Session 17 logged out. Waiting for processes to exit.
Dec 01 20:29:46 compute-0 systemd-logind[796]: Removed session 17.
Dec 01 20:29:52 compute-0 sshd-session[71041]: Accepted publickey for zuul from 192.168.122.30 port 52304 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:29:52 compute-0 systemd-logind[796]: New session 18 of user zuul.
Dec 01 20:29:52 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 01 20:29:52 compute-0 sshd-session[71041]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:29:53 compute-0 python3.9[71194]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:29:55 compute-0 sudo[71349]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sruidiswsmjqvviegsijrkojzpzdaafa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620995.2546458-34-39347176754034/AnsiballZ_setup.py'
Dec 01 20:29:55 compute-0 sudo[71349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:55 compute-0 python3.9[71351]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:29:56 compute-0 sudo[71349]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:56 compute-0 sudo[71433]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvgideortzfwkzadhmqnebjdxzbneitx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764620995.2546458-34-39347176754034/AnsiballZ_dnf.py'
Dec 01 20:29:56 compute-0 sudo[71433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:29:56 compute-0 python3.9[71435]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 01 20:29:58 compute-0 sudo[71433]: pam_unix(sudo:session): session closed for user root
Dec 01 20:29:58 compute-0 python3.9[71586]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:30:00 compute-0 python3.9[71737]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 01 20:30:01 compute-0 python3.9[71887]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:30:01 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 20:30:01 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 20:30:01 compute-0 python3.9[72038]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:30:02 compute-0 sshd-session[71044]: Connection closed by 192.168.122.30 port 52304
Dec 01 20:30:02 compute-0 sshd-session[71041]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:30:02 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 01 20:30:02 compute-0 systemd[1]: session-18.scope: Consumed 6.519s CPU time.
Dec 01 20:30:02 compute-0 systemd-logind[796]: Session 18 logged out. Waiting for processes to exit.
Dec 01 20:30:02 compute-0 systemd-logind[796]: Removed session 18.
Dec 01 20:30:10 compute-0 sshd-session[72063]: Accepted publickey for zuul from 38.102.83.9 port 39426 ssh2: RSA SHA256:oog7MteKjkTJ4LxwhsVGQb4CwQfo2OF07ZrpoP0w1bM
Dec 01 20:30:10 compute-0 systemd-logind[796]: New session 19 of user zuul.
Dec 01 20:30:10 compute-0 systemd[1]: Started Session 19 of User zuul.
Dec 01 20:30:10 compute-0 sshd-session[72063]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:30:10 compute-0 sudo[72139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gybnkmywvnkwbbquafddvzhrjkzvubqa ; /usr/bin/python3'
Dec 01 20:30:10 compute-0 sudo[72139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:10 compute-0 useradd[72143]: new group: name=ceph-admin, GID=42478
Dec 01 20:30:10 compute-0 useradd[72143]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 01 20:30:10 compute-0 sudo[72139]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:10 compute-0 sudo[72225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkzopphinvhpyjqggtfidzklccohdixd ; /usr/bin/python3'
Dec 01 20:30:10 compute-0 sudo[72225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:11 compute-0 sudo[72225]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:11 compute-0 sudo[72298]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnjnuewlwrvkconmshznpldhvgrxiqgw ; /usr/bin/python3'
Dec 01 20:30:11 compute-0 sudo[72298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:11 compute-0 sudo[72298]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:11 compute-0 sudo[72348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aerpdkmtxkmdwwmbbpvslnlmxcolzpbn ; /usr/bin/python3'
Dec 01 20:30:11 compute-0 sudo[72348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:12 compute-0 sudo[72348]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:12 compute-0 sudo[72374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyvrbcdgetsxtrozlnjmbypwscrsajuu ; /usr/bin/python3'
Dec 01 20:30:12 compute-0 sudo[72374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:12 compute-0 sudo[72374]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:12 compute-0 sudo[72400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwdtonapuechjrskukpizfgcomiekfqd ; /usr/bin/python3'
Dec 01 20:30:12 compute-0 sudo[72400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:12 compute-0 sudo[72400]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:13 compute-0 sudo[72426]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfhzajxaxuefwnnveesynkgphhncjacq ; /usr/bin/python3'
Dec 01 20:30:13 compute-0 sudo[72426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:13 compute-0 sudo[72426]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:13 compute-0 sudo[72504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzcnfnkwyodmqgogrbbywpvjbbuxblry ; /usr/bin/python3'
Dec 01 20:30:13 compute-0 sudo[72504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:13 compute-0 sudo[72504]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:14 compute-0 sudo[72577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrgqmislrnmeydvzhrziqgzalxplfxqh ; /usr/bin/python3'
Dec 01 20:30:14 compute-0 sudo[72577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:14 compute-0 sudo[72577]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:14 compute-0 sudo[72679]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aurioutbpmaixcubnjybftffurxvvxye ; /usr/bin/python3'
Dec 01 20:30:14 compute-0 sudo[72679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:15 compute-0 sudo[72679]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:15 compute-0 sudo[72752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsbmmxvxsqrriwtonnjrbgerpdltvstt ; /usr/bin/python3'
Dec 01 20:30:15 compute-0 sudo[72752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:15 compute-0 sudo[72752]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:16 compute-0 sudo[72802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrmqlleyzqnmeynjxjzkdmpekjzrzvkh ; /usr/bin/python3'
Dec 01 20:30:16 compute-0 sudo[72802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:16 compute-0 python3[72804]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:30:17 compute-0 chronyd[59280]: Selected source 23.133.168.244 (pool.ntp.org)
Dec 01 20:30:17 compute-0 sudo[72802]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:17 compute-0 sudo[72898]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioyywxfercmvyugopvkncvqyslfdbnur ; /usr/bin/python3'
Dec 01 20:30:17 compute-0 sudo[72898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:17 compute-0 python3[72900]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 01 20:30:19 compute-0 sudo[72898]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:19 compute-0 sudo[72925]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oejcfclxfewlzgnrsktlivqyguydhqux ; /usr/bin/python3'
Dec 01 20:30:19 compute-0 sudo[72925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:19 compute-0 python3[72927]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 01 20:30:19 compute-0 sudo[72925]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:19 compute-0 sudo[72951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oharfbovqhkqrpzlvuwpfrvbafdtdzue ; /usr/bin/python3'
Dec 01 20:30:19 compute-0 sudo[72951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:19 compute-0 python3[72953]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:30:19 compute-0 kernel: loop: module loaded
Dec 01 20:30:19 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 01 20:30:19 compute-0 sudo[72951]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:19 compute-0 sudo[72986]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlxdqthjikudpodrgupoqbuignctuwom ; /usr/bin/python3'
Dec 01 20:30:19 compute-0 sudo[72986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:19 compute-0 python3[72988]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:30:20 compute-0 lvm[72991]: PV /dev/loop3 not used.
Dec 01 20:30:20 compute-0 lvm[73000]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:30:20 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 01 20:30:20 compute-0 sudo[72986]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:20 compute-0 lvm[73002]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 01 20:30:20 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 01 20:30:20 compute-0 sudo[73078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugfeoxfazdfsdravjxoshnymwipvxxxn ; /usr/bin/python3'
Dec 01 20:30:20 compute-0 sudo[73078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:20 compute-0 python3[73080]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:30:20 compute-0 sudo[73078]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:20 compute-0 sudo[73151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-talpcpcqldpuujjafxunphgjqeqbkdth ; /usr/bin/python3'
Dec 01 20:30:20 compute-0 sudo[73151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:20 compute-0 python3[73153]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764621020.3494864-36437-150966850111619/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:30:20 compute-0 sudo[73151]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:21 compute-0 sudo[73201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmikcmsddflmmnyswntjhymthujxcasb ; /usr/bin/python3'
Dec 01 20:30:21 compute-0 sudo[73201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:21 compute-0 python3[73203]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:30:21 compute-0 systemd[1]: Reloading.
Dec 01 20:30:21 compute-0 systemd-sysv-generator[73236]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:30:21 compute-0 systemd-rc-local-generator[73233]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:30:21 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 01 20:30:21 compute-0 bash[73244]: /dev/loop3: [64513]:4327943 (/var/lib/ceph-osd-0.img)
Dec 01 20:30:21 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 01 20:30:21 compute-0 lvm[73245]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:30:21 compute-0 lvm[73245]: VG ceph_vg0 finished
Dec 01 20:30:21 compute-0 sudo[73201]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:22 compute-0 sudo[73269]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkvhkevdsirupavygtpjxqbuyfdzsvbq ; /usr/bin/python3'
Dec 01 20:30:22 compute-0 sudo[73269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:22 compute-0 python3[73271]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 01 20:30:23 compute-0 sudo[73269]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:23 compute-0 sudo[73296]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qirkprxlfhnotgadmpgsdpgetikaxfzl ; /usr/bin/python3'
Dec 01 20:30:23 compute-0 sudo[73296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:23 compute-0 python3[73298]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 01 20:30:23 compute-0 sudo[73296]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:23 compute-0 sudo[73322]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfufmvquqrdsdvewiyariaoddxtmtsls ; /usr/bin/python3'
Dec 01 20:30:23 compute-0 sudo[73322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:24 compute-0 python3[73324]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:30:24 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Dec 01 20:30:24 compute-0 sudo[73322]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:24 compute-0 sudo[73354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jotrclyvsztkdzwyrqhogqkqfxdbdloc ; /usr/bin/python3'
Dec 01 20:30:24 compute-0 sudo[73354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:24 compute-0 python3[73356]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:30:24 compute-0 lvm[73359]: PV /dev/loop4 not used.
Dec 01 20:30:24 compute-0 lvm[73361]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:30:24 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 01 20:30:24 compute-0 lvm[73367]:   1 logical volume(s) in volume group "ceph_vg1" now active
Dec 01 20:30:24 compute-0 lvm[73372]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:30:24 compute-0 lvm[73372]: VG ceph_vg1 finished
Dec 01 20:30:24 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 01 20:30:24 compute-0 sudo[73354]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:24 compute-0 sudo[73448]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igddyovbwtgwwrcvyyowzvzeiqumfmvy ; /usr/bin/python3'
Dec 01 20:30:24 compute-0 sudo[73448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:25 compute-0 python3[73450]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:30:25 compute-0 sudo[73448]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:25 compute-0 sudo[73521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iznzrpzidmnjyfsjdztxycxhxfshtnyw ; /usr/bin/python3'
Dec 01 20:30:25 compute-0 sudo[73521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:25 compute-0 python3[73523]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764621024.7665205-36464-39674229647254/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:30:25 compute-0 sudo[73521]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:25 compute-0 sudo[73571]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oatnaufrmiyipsaxsyasajpwsgzrycnj ; /usr/bin/python3'
Dec 01 20:30:25 compute-0 sudo[73571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:25 compute-0 python3[73573]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:30:25 compute-0 systemd[1]: Reloading.
Dec 01 20:30:25 compute-0 systemd-rc-local-generator[73603]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:30:25 compute-0 systemd-sysv-generator[73606]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:30:26 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 01 20:30:26 compute-0 bash[73613]: /dev/loop4: [64513]:4327981 (/var/lib/ceph-osd-1.img)
Dec 01 20:30:26 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 01 20:30:26 compute-0 lvm[73614]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:30:26 compute-0 lvm[73614]: VG ceph_vg1 finished
Dec 01 20:30:26 compute-0 sudo[73571]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:26 compute-0 sudo[73638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbvnsbdqgwbabjxmmegznqudbpkjmmyo ; /usr/bin/python3'
Dec 01 20:30:26 compute-0 sudo[73638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:26 compute-0 python3[73640]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 01 20:30:27 compute-0 sudo[73638]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:27 compute-0 sudo[73665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhwkqbaozrncbejajognwdustytmkele ; /usr/bin/python3'
Dec 01 20:30:27 compute-0 sudo[73665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:27 compute-0 python3[73667]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 01 20:30:27 compute-0 sudo[73665]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:28 compute-0 sudo[73691]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctydedomniabedcrlteermkurmgnmaah ; /usr/bin/python3'
Dec 01 20:30:28 compute-0 sudo[73691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:28 compute-0 python3[73693]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:30:28 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Dec 01 20:30:28 compute-0 sudo[73691]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:28 compute-0 sudo[73723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfhzgozdfpadsjwaoajitvhdwviphahu ; /usr/bin/python3'
Dec 01 20:30:28 compute-0 sudo[73723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:28 compute-0 python3[73725]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:30:28 compute-0 lvm[73728]: PV /dev/loop5 not used.
Dec 01 20:30:28 compute-0 lvm[73737]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:30:29 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec 01 20:30:29 compute-0 sudo[73723]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:29 compute-0 lvm[73739]:   1 logical volume(s) in volume group "ceph_vg2" now active
Dec 01 20:30:29 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec 01 20:30:29 compute-0 sudo[73815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rshicpedkipoubilpiibjzvaupzblknv ; /usr/bin/python3'
Dec 01 20:30:29 compute-0 sudo[73815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:29 compute-0 python3[73817]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:30:29 compute-0 sudo[73815]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:29 compute-0 sudo[73888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qviifngzudtkvtpmwksdhzxnfwwbixyn ; /usr/bin/python3'
Dec 01 20:30:29 compute-0 sudo[73888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:29 compute-0 python3[73890]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764621029.1232-36491-46657793912728/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:30:29 compute-0 sudo[73888]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:29 compute-0 sudo[73938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jddxbcewaizfvudqsvdmwalyaoogqevl ; /usr/bin/python3'
Dec 01 20:30:29 compute-0 sudo[73938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:30 compute-0 python3[73940]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:30:30 compute-0 systemd[1]: Reloading.
Dec 01 20:30:30 compute-0 systemd-sysv-generator[73973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:30:30 compute-0 systemd-rc-local-generator[73967]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:30:30 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 01 20:30:30 compute-0 bash[73980]: /dev/loop5: [64513]:4327991 (/var/lib/ceph-osd-2.img)
Dec 01 20:30:30 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 01 20:30:30 compute-0 sudo[73938]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:30 compute-0 lvm[73981]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:30:30 compute-0 lvm[73981]: VG ceph_vg2 finished
Dec 01 20:30:32 compute-0 python3[74006]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:30:34 compute-0 sudo[74097]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqibbdsmivhpkpnwoczjwbzuutztjruq ; /usr/bin/python3'
Dec 01 20:30:34 compute-0 sudo[74097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:34 compute-0 python3[74099]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 01 20:30:36 compute-0 sudo[74097]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:36 compute-0 sudo[74154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pthjnmgijoeznowysyxyhrvodzxrmjdo ; /usr/bin/python3'
Dec 01 20:30:36 compute-0 sudo[74154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:37 compute-0 python3[74156]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 01 20:30:39 compute-0 groupadd[74166]: group added to /etc/group: name=cephadm, GID=992
Dec 01 20:30:39 compute-0 groupadd[74166]: group added to /etc/gshadow: name=cephadm
Dec 01 20:30:40 compute-0 groupadd[74166]: new group: name=cephadm, GID=992
Dec 01 20:30:40 compute-0 useradd[74173]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 01 20:30:40 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 01 20:30:40 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 01 20:30:40 compute-0 sudo[74154]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 01 20:30:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 01 20:30:40 compute-0 systemd[1]: run-r510ae8cda85e438883a9c1a294e54100.service: Deactivated successfully.
Dec 01 20:30:40 compute-0 sudo[74273]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zszseeptkgngosydmvfenppwvtqdemrk ; /usr/bin/python3'
Dec 01 20:30:40 compute-0 sudo[74273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:40 compute-0 python3[74275]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 01 20:30:40 compute-0 sudo[74273]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:40 compute-0 sudo[74301]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgpxdgfbdwciwdxdbfzycnrngsvjdria ; /usr/bin/python3'
Dec 01 20:30:41 compute-0 sudo[74301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:41 compute-0 python3[74303]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:30:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:30:41 compute-0 sudo[74301]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:41 compute-0 sudo[74339]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqihzarbkcchsbtahoxvagcdeypywcbg ; /usr/bin/python3'
Dec 01 20:30:41 compute-0 sudo[74339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:41 compute-0 python3[74341]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:30:41 compute-0 sudo[74339]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:42 compute-0 sudo[74365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbgmslccagwvcdinjxljnjojrmndqrwl ; /usr/bin/python3'
Dec 01 20:30:42 compute-0 sudo[74365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:42 compute-0 python3[74367]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:30:42 compute-0 sudo[74365]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:42 compute-0 sudo[74443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgqjpdcxllhtmgjoijjdiprmtqyzgari ; /usr/bin/python3'
Dec 01 20:30:42 compute-0 sudo[74443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:42 compute-0 python3[74445]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:30:42 compute-0 sudo[74443]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:43 compute-0 sudo[74516]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apxkiekcdowvpjflbnpehbwggzwuudpz ; /usr/bin/python3'
Dec 01 20:30:43 compute-0 sudo[74516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:43 compute-0 python3[74518]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764621042.6236022-36639-84882470555814/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:30:43 compute-0 sudo[74516]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:43 compute-0 sudo[74618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtdojaixrqizphbqaatnbctgbqofdszl ; /usr/bin/python3'
Dec 01 20:30:43 compute-0 sudo[74618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:43 compute-0 python3[74620]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:30:43 compute-0 sudo[74618]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:44 compute-0 sudo[74691]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oifktpsktpwmfjvqbcnmmxcqppwnquxh ; /usr/bin/python3'
Dec 01 20:30:44 compute-0 sudo[74691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:44 compute-0 python3[74693]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764621043.6780248-36657-82278630700087/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:30:44 compute-0 sudo[74691]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:44 compute-0 sudo[74741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwutjumrtinerseqhqrbvkeudcycekot ; /usr/bin/python3'
Dec 01 20:30:44 compute-0 sudo[74741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:44 compute-0 python3[74743]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 01 20:30:44 compute-0 sudo[74741]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:44 compute-0 sudo[74769]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykrchyquomptczkcmgqgkofajshjcyyo ; /usr/bin/python3'
Dec 01 20:30:44 compute-0 sudo[74769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:44 compute-0 python3[74771]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 01 20:30:44 compute-0 sudo[74769]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:45 compute-0 sudo[74797]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjepguyejydzmpmrsdspwzwmtdvkdzos ; /usr/bin/python3'
Dec 01 20:30:45 compute-0 sudo[74797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:45 compute-0 python3[74799]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 01 20:30:45 compute-0 sudo[74797]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:45 compute-0 sudo[74825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yepcsoqysojmilfygotmdmtoirodokyd ; /usr/bin/python3'
Dec 01 20:30:45 compute-0 sudo[74825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:30:45 compute-0 python3[74827]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:30:45 compute-0 sshd-session[74831]: Accepted publickey for ceph-admin from 192.168.122.100 port 56638 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:30:45 compute-0 systemd-logind[796]: New session 20 of user ceph-admin.
Dec 01 20:30:45 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 01 20:30:45 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 01 20:30:45 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 01 20:30:45 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 01 20:30:45 compute-0 systemd[74835]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:30:45 compute-0 systemd[74835]: Queued start job for default target Main User Target.
Dec 01 20:30:45 compute-0 systemd[74835]: Created slice User Application Slice.
Dec 01 20:30:45 compute-0 systemd[74835]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 01 20:30:45 compute-0 systemd[74835]: Started Daily Cleanup of User's Temporary Directories.
Dec 01 20:30:45 compute-0 systemd[74835]: Reached target Paths.
Dec 01 20:30:45 compute-0 systemd[74835]: Reached target Timers.
Dec 01 20:30:45 compute-0 systemd[74835]: Starting D-Bus User Message Bus Socket...
Dec 01 20:30:45 compute-0 systemd[74835]: Starting Create User's Volatile Files and Directories...
Dec 01 20:30:45 compute-0 systemd[74835]: Finished Create User's Volatile Files and Directories.
Dec 01 20:30:45 compute-0 systemd[74835]: Listening on D-Bus User Message Bus Socket.
Dec 01 20:30:45 compute-0 systemd[74835]: Reached target Sockets.
Dec 01 20:30:45 compute-0 systemd[74835]: Reached target Basic System.
Dec 01 20:30:45 compute-0 systemd[74835]: Reached target Main User Target.
Dec 01 20:30:45 compute-0 systemd[74835]: Startup finished in 107ms.
Dec 01 20:30:45 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 01 20:30:45 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Dec 01 20:30:45 compute-0 sshd-session[74831]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:30:46 compute-0 sudo[74852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 01 20:30:46 compute-0 sudo[74852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:30:46 compute-0 sudo[74852]: pam_unix(sudo:session): session closed for user root
Dec 01 20:30:46 compute-0 sshd-session[74851]: Received disconnect from 192.168.122.100 port 56638:11: disconnected by user
Dec 01 20:30:46 compute-0 sshd-session[74851]: Disconnected from user ceph-admin 192.168.122.100 port 56638
Dec 01 20:30:46 compute-0 sshd-session[74831]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 01 20:30:46 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Dec 01 20:30:46 compute-0 systemd-logind[796]: Session 20 logged out. Waiting for processes to exit.
Dec 01 20:30:46 compute-0 systemd-logind[796]: Removed session 20.
Dec 01 20:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1816407731-lower\x2dmapped.mount: Deactivated successfully.
Dec 01 20:30:56 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 01 20:30:57 compute-0 systemd[74835]: Activating special unit Exit the Session...
Dec 01 20:30:57 compute-0 systemd[74835]: Stopped target Main User Target.
Dec 01 20:30:57 compute-0 systemd[74835]: Stopped target Basic System.
Dec 01 20:30:57 compute-0 systemd[74835]: Stopped target Paths.
Dec 01 20:30:57 compute-0 systemd[74835]: Stopped target Sockets.
Dec 01 20:30:57 compute-0 systemd[74835]: Stopped target Timers.
Dec 01 20:30:57 compute-0 systemd[74835]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 01 20:30:57 compute-0 systemd[74835]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 01 20:30:57 compute-0 systemd[74835]: Closed D-Bus User Message Bus Socket.
Dec 01 20:30:57 compute-0 systemd[74835]: Stopped Create User's Volatile Files and Directories.
Dec 01 20:30:57 compute-0 systemd[74835]: Removed slice User Application Slice.
Dec 01 20:30:57 compute-0 systemd[74835]: Reached target Shutdown.
Dec 01 20:30:57 compute-0 systemd[74835]: Finished Exit the Session.
Dec 01 20:30:57 compute-0 systemd[74835]: Reached target Exit the Session.
Dec 01 20:30:57 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 01 20:30:57 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 01 20:30:57 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 01 20:30:57 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 01 20:30:57 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 01 20:30:57 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 01 20:30:57 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 01 20:31:04 compute-0 podman[74931]: 2025-12-01 20:31:04.766473947 +0000 UTC m=+18.380576943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:31:04 compute-0 podman[74993]: 2025-12-01 20:31:04.830871283 +0000 UTC m=+0.042545336 container create a3d9ab62dd9dfe017c8ffa50af0a7f3ebef2e7ad5d21567c67d7303c7aa86722 (image=quay.io/ceph/ceph:v20, name=adoring_faraday, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 01 20:31:04 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 01 20:31:04 compute-0 systemd[1]: Started libpod-conmon-a3d9ab62dd9dfe017c8ffa50af0a7f3ebef2e7ad5d21567c67d7303c7aa86722.scope.
Dec 01 20:31:04 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:04 compute-0 podman[74993]: 2025-12-01 20:31:04.809498208 +0000 UTC m=+0.021172261 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:04 compute-0 podman[74993]: 2025-12-01 20:31:04.9158179 +0000 UTC m=+0.127491963 container init a3d9ab62dd9dfe017c8ffa50af0a7f3ebef2e7ad5d21567c67d7303c7aa86722 (image=quay.io/ceph/ceph:v20, name=adoring_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:04 compute-0 podman[74993]: 2025-12-01 20:31:04.924232263 +0000 UTC m=+0.135906316 container start a3d9ab62dd9dfe017c8ffa50af0a7f3ebef2e7ad5d21567c67d7303c7aa86722 (image=quay.io/ceph/ceph:v20, name=adoring_faraday, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 01 20:31:04 compute-0 podman[74993]: 2025-12-01 20:31:04.927804704 +0000 UTC m=+0.139478757 container attach a3d9ab62dd9dfe017c8ffa50af0a7f3ebef2e7ad5d21567c67d7303c7aa86722 (image=quay.io/ceph/ceph:v20, name=adoring_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:31:05 compute-0 adoring_faraday[75007]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 01 20:31:05 compute-0 systemd[1]: libpod-a3d9ab62dd9dfe017c8ffa50af0a7f3ebef2e7ad5d21567c67d7303c7aa86722.scope: Deactivated successfully.
Dec 01 20:31:05 compute-0 podman[74993]: 2025-12-01 20:31:05.024387613 +0000 UTC m=+0.236061666 container died a3d9ab62dd9dfe017c8ffa50af0a7f3ebef2e7ad5d21567c67d7303c7aa86722 (image=quay.io/ceph/ceph:v20, name=adoring_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-561061f4f0af48c83d8fe83c01f014899b6413380e029b7ec3103ee2a1a24fa6-merged.mount: Deactivated successfully.
Dec 01 20:31:05 compute-0 podman[74993]: 2025-12-01 20:31:05.061096017 +0000 UTC m=+0.272770070 container remove a3d9ab62dd9dfe017c8ffa50af0a7f3ebef2e7ad5d21567c67d7303c7aa86722 (image=quay.io/ceph/ceph:v20, name=adoring_faraday, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:05 compute-0 systemd[1]: libpod-conmon-a3d9ab62dd9dfe017c8ffa50af0a7f3ebef2e7ad5d21567c67d7303c7aa86722.scope: Deactivated successfully.
Dec 01 20:31:05 compute-0 podman[75024]: 2025-12-01 20:31:05.119524037 +0000 UTC m=+0.040549794 container create 8c7df24fe70938327fd8eb075c29a3d21d2351b0c924cbcc98ed515bb3e4ad14 (image=quay.io/ceph/ceph:v20, name=optimistic_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:05 compute-0 systemd[1]: Started libpod-conmon-8c7df24fe70938327fd8eb075c29a3d21d2351b0c924cbcc98ed515bb3e4ad14.scope.
Dec 01 20:31:05 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:05 compute-0 podman[75024]: 2025-12-01 20:31:05.1757901 +0000 UTC m=+0.096815887 container init 8c7df24fe70938327fd8eb075c29a3d21d2351b0c924cbcc98ed515bb3e4ad14 (image=quay.io/ceph/ceph:v20, name=optimistic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 01 20:31:05 compute-0 podman[75024]: 2025-12-01 20:31:05.181931502 +0000 UTC m=+0.102957299 container start 8c7df24fe70938327fd8eb075c29a3d21d2351b0c924cbcc98ed515bb3e4ad14 (image=quay.io/ceph/ceph:v20, name=optimistic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 01 20:31:05 compute-0 optimistic_pasteur[75040]: 167 167
Dec 01 20:31:05 compute-0 systemd[1]: libpod-8c7df24fe70938327fd8eb075c29a3d21d2351b0c924cbcc98ed515bb3e4ad14.scope: Deactivated successfully.
Dec 01 20:31:05 compute-0 podman[75024]: 2025-12-01 20:31:05.1863569 +0000 UTC m=+0.107382667 container attach 8c7df24fe70938327fd8eb075c29a3d21d2351b0c924cbcc98ed515bb3e4ad14 (image=quay.io/ceph/ceph:v20, name=optimistic_pasteur, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:31:05 compute-0 podman[75024]: 2025-12-01 20:31:05.187250128 +0000 UTC m=+0.108275925 container died 8c7df24fe70938327fd8eb075c29a3d21d2351b0c924cbcc98ed515bb3e4ad14 (image=quay.io/ceph/ceph:v20, name=optimistic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:05 compute-0 podman[75024]: 2025-12-01 20:31:05.101820976 +0000 UTC m=+0.022846823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:05 compute-0 podman[75024]: 2025-12-01 20:31:05.228686729 +0000 UTC m=+0.149712496 container remove 8c7df24fe70938327fd8eb075c29a3d21d2351b0c924cbcc98ed515bb3e4ad14 (image=quay.io/ceph/ceph:v20, name=optimistic_pasteur, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 20:31:05 compute-0 systemd[1]: libpod-conmon-8c7df24fe70938327fd8eb075c29a3d21d2351b0c924cbcc98ed515bb3e4ad14.scope: Deactivated successfully.
Dec 01 20:31:05 compute-0 podman[75059]: 2025-12-01 20:31:05.289538595 +0000 UTC m=+0.040058369 container create 7c320d7997ddad0045cdab65478c5e8ce43b687deb9599c42d3ed87f8ccadd32 (image=quay.io/ceph/ceph:v20, name=determined_hoover, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:31:05 compute-0 systemd[1]: Started libpod-conmon-7c320d7997ddad0045cdab65478c5e8ce43b687deb9599c42d3ed87f8ccadd32.scope.
Dec 01 20:31:05 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:05 compute-0 podman[75059]: 2025-12-01 20:31:05.366109091 +0000 UTC m=+0.116628875 container init 7c320d7997ddad0045cdab65478c5e8ce43b687deb9599c42d3ed87f8ccadd32 (image=quay.io/ceph/ceph:v20, name=determined_hoover, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:05 compute-0 podman[75059]: 2025-12-01 20:31:05.271757801 +0000 UTC m=+0.022277595 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:05 compute-0 podman[75059]: 2025-12-01 20:31:05.371124937 +0000 UTC m=+0.121644711 container start 7c320d7997ddad0045cdab65478c5e8ce43b687deb9599c42d3ed87f8ccadd32 (image=quay.io/ceph/ceph:v20, name=determined_hoover, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 01 20:31:05 compute-0 determined_hoover[75075]: AQAJ+y1p6ewzFxAAXv0F29RfeF4kZTRFbZi7xw==
Dec 01 20:31:05 compute-0 systemd[1]: libpod-7c320d7997ddad0045cdab65478c5e8ce43b687deb9599c42d3ed87f8ccadd32.scope: Deactivated successfully.
Dec 01 20:31:05 compute-0 podman[75059]: 2025-12-01 20:31:05.461001487 +0000 UTC m=+0.211521271 container attach 7c320d7997ddad0045cdab65478c5e8ce43b687deb9599c42d3ed87f8ccadd32 (image=quay.io/ceph/ceph:v20, name=determined_hoover, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:05 compute-0 podman[75059]: 2025-12-01 20:31:05.461607326 +0000 UTC m=+0.212127120 container died 7c320d7997ddad0045cdab65478c5e8ce43b687deb9599c42d3ed87f8ccadd32 (image=quay.io/ceph/ceph:v20, name=determined_hoover, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 01 20:31:05 compute-0 podman[75059]: 2025-12-01 20:31:05.495388469 +0000 UTC m=+0.245908243 container remove 7c320d7997ddad0045cdab65478c5e8ce43b687deb9599c42d3ed87f8ccadd32 (image=quay.io/ceph/ceph:v20, name=determined_hoover, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 01 20:31:05 compute-0 systemd[1]: libpod-conmon-7c320d7997ddad0045cdab65478c5e8ce43b687deb9599c42d3ed87f8ccadd32.scope: Deactivated successfully.
Dec 01 20:31:05 compute-0 podman[75097]: 2025-12-01 20:31:05.530543074 +0000 UTC m=+0.019595302 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:05 compute-0 podman[75097]: 2025-12-01 20:31:05.699913651 +0000 UTC m=+0.188965869 container create 167fa0aa98ad37edfa4cd32ed11d42df5e5fb7e896f4148ac3d90585209ae954 (image=quay.io/ceph/ceph:v20, name=loving_bose, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:05 compute-0 systemd[1]: Started libpod-conmon-167fa0aa98ad37edfa4cd32ed11d42df5e5fb7e896f4148ac3d90585209ae954.scope.
Dec 01 20:31:05 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:05 compute-0 podman[75097]: 2025-12-01 20:31:05.778647665 +0000 UTC m=+0.267699883 container init 167fa0aa98ad37edfa4cd32ed11d42df5e5fb7e896f4148ac3d90585209ae954 (image=quay.io/ceph/ceph:v20, name=loving_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:31:05 compute-0 podman[75097]: 2025-12-01 20:31:05.782961379 +0000 UTC m=+0.272013577 container start 167fa0aa98ad37edfa4cd32ed11d42df5e5fb7e896f4148ac3d90585209ae954 (image=quay.io/ceph/ceph:v20, name=loving_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:31:05 compute-0 podman[75097]: 2025-12-01 20:31:05.786261002 +0000 UTC m=+0.275313220 container attach 167fa0aa98ad37edfa4cd32ed11d42df5e5fb7e896f4148ac3d90585209ae954 (image=quay.io/ceph/ceph:v20, name=loving_bose, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:05 compute-0 loving_bose[75113]: AQAJ+y1pNJ+6LxAARo2Gfv6D6kQEnkQ7zjktZA==
Dec 01 20:31:05 compute-0 systemd[1]: libpod-167fa0aa98ad37edfa4cd32ed11d42df5e5fb7e896f4148ac3d90585209ae954.scope: Deactivated successfully.
Dec 01 20:31:05 compute-0 podman[75097]: 2025-12-01 20:31:05.804241763 +0000 UTC m=+0.293293971 container died 167fa0aa98ad37edfa4cd32ed11d42df5e5fb7e896f4148ac3d90585209ae954 (image=quay.io/ceph/ceph:v20, name=loving_bose, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 01 20:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-02f90cd1062c8a3343a79b4f6701ccef14833c44b869b3177ce22ca633f58822-merged.mount: Deactivated successfully.
Dec 01 20:31:05 compute-0 podman[75097]: 2025-12-01 20:31:05.935496402 +0000 UTC m=+0.424548600 container remove 167fa0aa98ad37edfa4cd32ed11d42df5e5fb7e896f4148ac3d90585209ae954 (image=quay.io/ceph/ceph:v20, name=loving_bose, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:31:05 compute-0 systemd[1]: libpod-conmon-167fa0aa98ad37edfa4cd32ed11d42df5e5fb7e896f4148ac3d90585209ae954.scope: Deactivated successfully.
Dec 01 20:31:06 compute-0 podman[75134]: 2025-12-01 20:31:06.001314272 +0000 UTC m=+0.043223997 container create 04591b1b1fae9f691b8c9c70e3207d285c72001af744fc10ad7227026aa2f55c (image=quay.io/ceph/ceph:v20, name=optimistic_curie, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:06 compute-0 systemd[1]: Started libpod-conmon-04591b1b1fae9f691b8c9c70e3207d285c72001af744fc10ad7227026aa2f55c.scope.
Dec 01 20:31:06 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:06 compute-0 podman[75134]: 2025-12-01 20:31:05.985258632 +0000 UTC m=+0.027168417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:06 compute-0 podman[75134]: 2025-12-01 20:31:06.303914052 +0000 UTC m=+0.345823787 container init 04591b1b1fae9f691b8c9c70e3207d285c72001af744fc10ad7227026aa2f55c (image=quay.io/ceph/ceph:v20, name=optimistic_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 20:31:06 compute-0 podman[75134]: 2025-12-01 20:31:06.309082232 +0000 UTC m=+0.350991957 container start 04591b1b1fae9f691b8c9c70e3207d285c72001af744fc10ad7227026aa2f55c (image=quay.io/ceph/ceph:v20, name=optimistic_curie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:31:06 compute-0 optimistic_curie[75150]: AQAK+y1puZB/ExAAvoSfdulTNEGdQzp8r1Yq5A==
Dec 01 20:31:06 compute-0 systemd[1]: libpod-04591b1b1fae9f691b8c9c70e3207d285c72001af744fc10ad7227026aa2f55c.scope: Deactivated successfully.
Dec 01 20:31:09 compute-0 podman[75134]: 2025-12-01 20:31:09.673523485 +0000 UTC m=+3.715433250 container attach 04591b1b1fae9f691b8c9c70e3207d285c72001af744fc10ad7227026aa2f55c (image=quay.io/ceph/ceph:v20, name=optimistic_curie, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:09 compute-0 podman[75134]: 2025-12-01 20:31:09.675255369 +0000 UTC m=+3.717165134 container died 04591b1b1fae9f691b8c9c70e3207d285c72001af744fc10ad7227026aa2f55c (image=quay.io/ceph/ceph:v20, name=optimistic_curie, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a86d1b623f905b864d93ae1f4db55682c90b0b5f2c515f4a6fdfe801a6366dcf-merged.mount: Deactivated successfully.
Dec 01 20:31:09 compute-0 podman[75134]: 2025-12-01 20:31:09.74075799 +0000 UTC m=+3.782667725 container remove 04591b1b1fae9f691b8c9c70e3207d285c72001af744fc10ad7227026aa2f55c (image=quay.io/ceph/ceph:v20, name=optimistic_curie, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 01 20:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:31:09 compute-0 systemd[1]: libpod-conmon-04591b1b1fae9f691b8c9c70e3207d285c72001af744fc10ad7227026aa2f55c.scope: Deactivated successfully.
Dec 01 20:31:09 compute-0 podman[75172]: 2025-12-01 20:31:09.821355261 +0000 UTC m=+0.054141549 container create de5f24341cbd35ab88d6d425da691777cf239e24b2ae5d8621db2a911dbd9d97 (image=quay.io/ceph/ceph:v20, name=sad_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 01 20:31:09 compute-0 systemd[1]: Started libpod-conmon-de5f24341cbd35ab88d6d425da691777cf239e24b2ae5d8621db2a911dbd9d97.scope.
Dec 01 20:31:09 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e010d730661fff1b286bc70821fee8c17c5d0953e9bb6d5d955f34f7113dc4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:09 compute-0 podman[75172]: 2025-12-01 20:31:09.794191825 +0000 UTC m=+0.026978123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:09 compute-0 podman[75172]: 2025-12-01 20:31:09.89708568 +0000 UTC m=+0.129871948 container init de5f24341cbd35ab88d6d425da691777cf239e24b2ae5d8621db2a911dbd9d97 (image=quay.io/ceph/ceph:v20, name=sad_beaver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 01 20:31:09 compute-0 podman[75172]: 2025-12-01 20:31:09.902264802 +0000 UTC m=+0.135051050 container start de5f24341cbd35ab88d6d425da691777cf239e24b2ae5d8621db2a911dbd9d97 (image=quay.io/ceph/ceph:v20, name=sad_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:31:09 compute-0 podman[75172]: 2025-12-01 20:31:09.906312927 +0000 UTC m=+0.139099195 container attach de5f24341cbd35ab88d6d425da691777cf239e24b2ae5d8621db2a911dbd9d97 (image=quay.io/ceph/ceph:v20, name=sad_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:09 compute-0 sad_beaver[75189]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 01 20:31:09 compute-0 sad_beaver[75189]: setting min_mon_release = tentacle
Dec 01 20:31:09 compute-0 sad_beaver[75189]: /usr/bin/monmaptool: set fsid to dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:09 compute-0 sad_beaver[75189]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 01 20:31:09 compute-0 systemd[1]: libpod-de5f24341cbd35ab88d6d425da691777cf239e24b2ae5d8621db2a911dbd9d97.scope: Deactivated successfully.
Dec 01 20:31:09 compute-0 podman[75196]: 2025-12-01 20:31:09.964682737 +0000 UTC m=+0.023347999 container died de5f24341cbd35ab88d6d425da691777cf239e24b2ae5d8621db2a911dbd9d97 (image=quay.io/ceph/ceph:v20, name=sad_beaver, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:10 compute-0 podman[75196]: 2025-12-01 20:31:10.001737861 +0000 UTC m=+0.060403073 container remove de5f24341cbd35ab88d6d425da691777cf239e24b2ae5d8621db2a911dbd9d97 (image=quay.io/ceph/ceph:v20, name=sad_beaver, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:31:10 compute-0 systemd[1]: libpod-conmon-de5f24341cbd35ab88d6d425da691777cf239e24b2ae5d8621db2a911dbd9d97.scope: Deactivated successfully.
Dec 01 20:31:10 compute-0 podman[75211]: 2025-12-01 20:31:10.07871323 +0000 UTC m=+0.051579359 container create 0bc3797af5c5b1ebf8512ac882892ee3ba878e6002042fdc288848c4b21eda45 (image=quay.io/ceph/ceph:v20, name=relaxed_shamir, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:10 compute-0 systemd[1]: Started libpod-conmon-0bc3797af5c5b1ebf8512ac882892ee3ba878e6002042fdc288848c4b21eda45.scope.
Dec 01 20:31:10 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203e69be37b3ad8b72141cf3082a76936667127de46edb651cb359f34c0f3bf5/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203e69be37b3ad8b72141cf3082a76936667127de46edb651cb359f34c0f3bf5/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203e69be37b3ad8b72141cf3082a76936667127de46edb651cb359f34c0f3bf5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203e69be37b3ad8b72141cf3082a76936667127de46edb651cb359f34c0f3bf5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:10 compute-0 podman[75211]: 2025-12-01 20:31:10.050393267 +0000 UTC m=+0.023259486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:10 compute-0 podman[75211]: 2025-12-01 20:31:10.15446781 +0000 UTC m=+0.127333949 container init 0bc3797af5c5b1ebf8512ac882892ee3ba878e6002042fdc288848c4b21eda45 (image=quay.io/ceph/ceph:v20, name=relaxed_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 01 20:31:10 compute-0 podman[75211]: 2025-12-01 20:31:10.163317806 +0000 UTC m=+0.136183945 container start 0bc3797af5c5b1ebf8512ac882892ee3ba878e6002042fdc288848c4b21eda45 (image=quay.io/ceph/ceph:v20, name=relaxed_shamir, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:10 compute-0 podman[75211]: 2025-12-01 20:31:10.166057201 +0000 UTC m=+0.138923350 container attach 0bc3797af5c5b1ebf8512ac882892ee3ba878e6002042fdc288848c4b21eda45 (image=quay.io/ceph/ceph:v20, name=relaxed_shamir, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:10 compute-0 systemd[1]: libpod-0bc3797af5c5b1ebf8512ac882892ee3ba878e6002042fdc288848c4b21eda45.scope: Deactivated successfully.
Dec 01 20:31:10 compute-0 podman[75211]: 2025-12-01 20:31:10.258542013 +0000 UTC m=+0.231408192 container died 0bc3797af5c5b1ebf8512ac882892ee3ba878e6002042fdc288848c4b21eda45 (image=quay.io/ceph/ceph:v20, name=relaxed_shamir, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 01 20:31:10 compute-0 podman[75211]: 2025-12-01 20:31:10.308284273 +0000 UTC m=+0.281150422 container remove 0bc3797af5c5b1ebf8512ac882892ee3ba878e6002042fdc288848c4b21eda45 (image=quay.io/ceph/ceph:v20, name=relaxed_shamir, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 01 20:31:10 compute-0 systemd[1]: libpod-conmon-0bc3797af5c5b1ebf8512ac882892ee3ba878e6002042fdc288848c4b21eda45.scope: Deactivated successfully.
Dec 01 20:31:10 compute-0 systemd[1]: Reloading.
Dec 01 20:31:10 compute-0 systemd-sysv-generator[75296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:10 compute-0 systemd-rc-local-generator[75293]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:31:10 compute-0 systemd[1]: Reloading.
Dec 01 20:31:10 compute-0 systemd-sysv-generator[75337]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:10 compute-0 systemd-rc-local-generator[75334]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:10 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 01 20:31:10 compute-0 systemd[1]: Reloading.
Dec 01 20:31:10 compute-0 systemd-rc-local-generator[75373]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:10 compute-0 systemd-sysv-generator[75377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:11 compute-0 systemd[1]: Reached target Ceph cluster dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:31:11 compute-0 systemd[1]: Reloading.
Dec 01 20:31:11 compute-0 systemd-rc-local-generator[75409]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:11 compute-0 systemd-sysv-generator[75413]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:11 compute-0 systemd[1]: Reloading.
Dec 01 20:31:11 compute-0 systemd-rc-local-generator[75450]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:11 compute-0 systemd-sysv-generator[75453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:11 compute-0 systemd[1]: Created slice Slice /system/ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:31:11 compute-0 systemd[1]: Reached target System Time Set.
Dec 01 20:31:11 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 01 20:31:11 compute-0 systemd[1]: Starting Ceph mon.compute-0 for dcf60a89-bba0-58b0-a1bf-d4bde723201b...
Dec 01 20:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:31:11 compute-0 podman[75507]: 2025-12-01 20:31:11.965935593 +0000 UTC m=+0.035140177 container create c0db23a316b60d9b1f84bc59850ae38fd5559aefcf15970ec1e9b5cdd5bf727b (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4369263d155a112d7e1df95e2242a66b41d4dce8c0ef50dcd3b740201824f437/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4369263d155a112d7e1df95e2242a66b41d4dce8c0ef50dcd3b740201824f437/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4369263d155a112d7e1df95e2242a66b41d4dce8c0ef50dcd3b740201824f437/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4369263d155a112d7e1df95e2242a66b41d4dce8c0ef50dcd3b740201824f437/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:12 compute-0 podman[75507]: 2025-12-01 20:31:12.029587186 +0000 UTC m=+0.098791790 container init c0db23a316b60d9b1f84bc59850ae38fd5559aefcf15970ec1e9b5cdd5bf727b (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:12 compute-0 podman[75507]: 2025-12-01 20:31:12.037665517 +0000 UTC m=+0.106870091 container start c0db23a316b60d9b1f84bc59850ae38fd5559aefcf15970ec1e9b5cdd5bf727b (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:12 compute-0 bash[75507]: c0db23a316b60d9b1f84bc59850ae38fd5559aefcf15970ec1e9b5cdd5bf727b
Dec 01 20:31:12 compute-0 podman[75507]: 2025-12-01 20:31:11.95013715 +0000 UTC m=+0.019341724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:12 compute-0 systemd[1]: Started Ceph mon.compute-0 for dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:31:12 compute-0 ceph-mon[75527]: set uid:gid to 167:167 (ceph:ceph)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 01 20:31:12 compute-0 ceph-mon[75527]: pidfile_write: ignore empty --pid-file
Dec 01 20:31:12 compute-0 ceph-mon[75527]: load: jerasure load: lrc 
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: RocksDB version: 7.9.2
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Git sha 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: DB SUMMARY
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: DB Session ID:  5AUAU52MOW747FKSD2D8
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: CURRENT file:  CURRENT
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: IDENTITY file:  IDENTITY
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                         Options.error_if_exists: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                       Options.create_if_missing: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                         Options.paranoid_checks: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                                     Options.env: 0x558e90c5e440
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                                Options.info_log: 0x558e92d3d3e0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                Options.max_file_opening_threads: 16
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                              Options.statistics: (nil)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                               Options.use_fsync: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                       Options.max_log_file_size: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                         Options.allow_fallocate: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                        Options.use_direct_reads: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:          Options.create_missing_column_families: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                              Options.db_log_dir: 
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                                 Options.wal_dir: 
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                   Options.advise_random_on_open: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                    Options.write_buffer_manager: 0x558e92cbc140
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                            Options.rate_limiter: (nil)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                  Options.unordered_write: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                               Options.row_cache: None
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                              Options.wal_filter: None
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.allow_ingest_behind: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.two_write_queues: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.manual_wal_flush: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.wal_compression: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.atomic_flush: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                 Options.log_readahead_size: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.allow_data_in_errors: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.db_host_id: __hostname__
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.max_background_jobs: 2
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.max_background_compactions: -1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.max_subcompactions: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.max_total_wal_size: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                          Options.max_open_files: -1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                          Options.bytes_per_sync: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:       Options.compaction_readahead_size: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                  Options.max_background_flushes: -1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Compression algorithms supported:
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         kZSTD supported: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         kXpressCompression supported: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         kBZip2Compression supported: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         kLZ4Compression supported: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         kZlibCompression supported: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         kLZ4HCCompression supported: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         kSnappyCompression supported: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:           Options.merge_operator: 
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:        Options.compaction_filter: None
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e92cc8600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558e92cad8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:        Options.write_buffer_size: 33554432
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:  Options.max_write_buffer_number: 2
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:          Options.compression: NoCompression
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.num_levels: 7
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9d350523-84d7-4671-b42c-85f993c10d4b
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621072082242, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621072084130, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "5AUAU52MOW747FKSD2D8", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621072084265, "job": 1, "event": "recovery_finished"}
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558e92cdae00
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: DB pointer 0x558e92e26000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:31:12 compute-0 ceph-mon[75527]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558e92cad8d0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 01 20:31:12 compute-0 ceph-mon[75527]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@-1(???) e0 preinit fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 01 20:31:12 compute-0 ceph-mon[75527]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 01 20:31:12 compute-0 ceph-mon[75527]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [DBG] : fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [DBG] : last_changed 2025-12-01T20:31:09.927398+0000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [DBG] : created 2025-12-01T20:31:09.927398+0000
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2025-12-01T20:31:10.206254Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).mds e1 new map
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-12-01T20:31:12:115619+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [DBG] : fsmap 
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mkfs dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:12 compute-0 podman[75528]: 2025-12-01 20:31:12.128660493 +0000 UTC m=+0.052189828 container create 354592c267edd877a289e9d54a6f2f8f0580c1e686e7de9c59b09faa77b96a6b (image=quay.io/ceph/ceph:v20, name=modest_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 01 20:31:12 compute-0 systemd[1]: Started libpod-conmon-354592c267edd877a289e9d54a6f2f8f0580c1e686e7de9c59b09faa77b96a6b.scope.
Dec 01 20:31:12 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:12 compute-0 podman[75528]: 2025-12-01 20:31:12.108325909 +0000 UTC m=+0.031855344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa637426c9b37c2d592e52ec7d78602e132a42a10631c465b4baa90a1937be2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa637426c9b37c2d592e52ec7d78602e132a42a10631c465b4baa90a1937be2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa637426c9b37c2d592e52ec7d78602e132a42a10631c465b4baa90a1937be2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:12 compute-0 podman[75528]: 2025-12-01 20:31:12.228828414 +0000 UTC m=+0.152357799 container init 354592c267edd877a289e9d54a6f2f8f0580c1e686e7de9c59b09faa77b96a6b (image=quay.io/ceph/ceph:v20, name=modest_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:12 compute-0 podman[75528]: 2025-12-01 20:31:12.235502252 +0000 UTC m=+0.159031587 container start 354592c267edd877a289e9d54a6f2f8f0580c1e686e7de9c59b09faa77b96a6b (image=quay.io/ceph/ceph:v20, name=modest_bhaskara, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:12 compute-0 podman[75528]: 2025-12-01 20:31:12.239542948 +0000 UTC m=+0.163072293 container attach 354592c267edd877a289e9d54a6f2f8f0580c1e686e7de9c59b09faa77b96a6b (image=quay.io/ceph/ceph:v20, name=modest_bhaskara, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1181850067' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:   cluster:
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:     id:     dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:     health: HEALTH_OK
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:  
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:   services:
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:     mon: 1 daemons, quorum compute-0 (age 0.296659s) [leader: compute-0]
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:     mgr: no daemons active
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:     osd: 0 osds: 0 up, 0 in
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:  
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:   data:
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:     pools:   0 pools, 0 pgs
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:     objects: 0 objects, 0 B
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:     usage:   0 B used, 0 B / 0 B avail
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:     pgs:     
Dec 01 20:31:12 compute-0 modest_bhaskara[75583]:  
Dec 01 20:31:12 compute-0 systemd[1]: libpod-354592c267edd877a289e9d54a6f2f8f0580c1e686e7de9c59b09faa77b96a6b.scope: Deactivated successfully.
Dec 01 20:31:12 compute-0 podman[75528]: 2025-12-01 20:31:12.429148215 +0000 UTC m=+0.352677550 container died 354592c267edd877a289e9d54a6f2f8f0580c1e686e7de9c59b09faa77b96a6b (image=quay.io/ceph/ceph:v20, name=modest_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 01 20:31:12 compute-0 podman[75528]: 2025-12-01 20:31:12.464616891 +0000 UTC m=+0.388146226 container remove 354592c267edd877a289e9d54a6f2f8f0580c1e686e7de9c59b09faa77b96a6b (image=quay.io/ceph/ceph:v20, name=modest_bhaskara, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 01 20:31:12 compute-0 systemd[1]: libpod-conmon-354592c267edd877a289e9d54a6f2f8f0580c1e686e7de9c59b09faa77b96a6b.scope: Deactivated successfully.
Dec 01 20:31:12 compute-0 podman[75621]: 2025-12-01 20:31:12.575252998 +0000 UTC m=+0.081946585 container create 650c570c2ee591262dd86029f571f6f6e3b84c64bba6fb4dc4f76a6ad4a5c31d (image=quay.io/ceph/ceph:v20, name=tender_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:31:12 compute-0 podman[75621]: 2025-12-01 20:31:12.526648994 +0000 UTC m=+0.033342631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:12 compute-0 systemd[1]: Started libpod-conmon-650c570c2ee591262dd86029f571f6f6e3b84c64bba6fb4dc4f76a6ad4a5c31d.scope.
Dec 01 20:31:12 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458b20a2ba4c766e70e97341d402e97a8743eb26eca5b5ef5672e669d64a1d13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458b20a2ba4c766e70e97341d402e97a8743eb26eca5b5ef5672e669d64a1d13/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458b20a2ba4c766e70e97341d402e97a8743eb26eca5b5ef5672e669d64a1d13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458b20a2ba4c766e70e97341d402e97a8743eb26eca5b5ef5672e669d64a1d13/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:12 compute-0 podman[75621]: 2025-12-01 20:31:12.704492705 +0000 UTC m=+0.211186362 container init 650c570c2ee591262dd86029f571f6f6e3b84c64bba6fb4dc4f76a6ad4a5c31d (image=quay.io/ceph/ceph:v20, name=tender_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:12 compute-0 podman[75621]: 2025-12-01 20:31:12.718880443 +0000 UTC m=+0.225573990 container start 650c570c2ee591262dd86029f571f6f6e3b84c64bba6fb4dc4f76a6ad4a5c31d (image=quay.io/ceph/ceph:v20, name=tender_wiles, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Dec 01 20:31:12 compute-0 podman[75621]: 2025-12-01 20:31:12.72264013 +0000 UTC m=+0.229333767 container attach 650c570c2ee591262dd86029f571f6f6e3b84c64bba6fb4dc4f76a6ad4a5c31d (image=quay.io/ceph/ceph:v20, name=tender_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/21736640' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 01 20:31:12 compute-0 ceph-mon[75527]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/21736640' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 01 20:31:12 compute-0 tender_wiles[75637]: 
Dec 01 20:31:12 compute-0 tender_wiles[75637]: [global]
Dec 01 20:31:12 compute-0 tender_wiles[75637]:         fsid = dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:12 compute-0 tender_wiles[75637]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 01 20:31:12 compute-0 tender_wiles[75637]:         osd_crush_chooseleaf_type = 0
Dec 01 20:31:12 compute-0 systemd[1]: libpod-650c570c2ee591262dd86029f571f6f6e3b84c64bba6fb4dc4f76a6ad4a5c31d.scope: Deactivated successfully.
Dec 01 20:31:12 compute-0 podman[75621]: 2025-12-01 20:31:12.968831722 +0000 UTC m=+0.475525289 container died 650c570c2ee591262dd86029f571f6f6e3b84c64bba6fb4dc4f76a6ad4a5c31d (image=quay.io/ceph/ceph:v20, name=tender_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-458b20a2ba4c766e70e97341d402e97a8743eb26eca5b5ef5672e669d64a1d13-merged.mount: Deactivated successfully.
Dec 01 20:31:13 compute-0 podman[75621]: 2025-12-01 20:31:13.003643826 +0000 UTC m=+0.510337383 container remove 650c570c2ee591262dd86029f571f6f6e3b84c64bba6fb4dc4f76a6ad4a5c31d (image=quay.io/ceph/ceph:v20, name=tender_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:13 compute-0 systemd[1]: libpod-conmon-650c570c2ee591262dd86029f571f6f6e3b84c64bba6fb4dc4f76a6ad4a5c31d.scope: Deactivated successfully.
Dec 01 20:31:13 compute-0 podman[75673]: 2025-12-01 20:31:13.055388249 +0000 UTC m=+0.035833809 container create c7814fda8f02f34478050d76fc55f3e27f88d4ff80094f9df3509e7910fba15c (image=quay.io/ceph/ceph:v20, name=recursing_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:13 compute-0 systemd[1]: Started libpod-conmon-c7814fda8f02f34478050d76fc55f3e27f88d4ff80094f9df3509e7910fba15c.scope.
Dec 01 20:31:13 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac6006fe4e4b9b4d8510cc179b57e339257c11090ce75b26b7eb69c3847c8ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac6006fe4e4b9b4d8510cc179b57e339257c11090ce75b26b7eb69c3847c8ab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac6006fe4e4b9b4d8510cc179b57e339257c11090ce75b26b7eb69c3847c8ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac6006fe4e4b9b4d8510cc179b57e339257c11090ce75b26b7eb69c3847c8ab/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:13 compute-0 podman[75673]: 2025-12-01 20:31:13.125984418 +0000 UTC m=+0.106429978 container init c7814fda8f02f34478050d76fc55f3e27f88d4ff80094f9df3509e7910fba15c (image=quay.io/ceph/ceph:v20, name=recursing_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 01 20:31:13 compute-0 podman[75673]: 2025-12-01 20:31:13.13054412 +0000 UTC m=+0.110989660 container start c7814fda8f02f34478050d76fc55f3e27f88d4ff80094f9df3509e7910fba15c (image=quay.io/ceph/ceph:v20, name=recursing_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 01 20:31:13 compute-0 podman[75673]: 2025-12-01 20:31:13.134609857 +0000 UTC m=+0.115055427 container attach c7814fda8f02f34478050d76fc55f3e27f88d4ff80094f9df3509e7910fba15c (image=quay.io/ceph/ceph:v20, name=recursing_ptolemy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:13 compute-0 podman[75673]: 2025-12-01 20:31:13.039477182 +0000 UTC m=+0.019922742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:13 compute-0 ceph-mon[75527]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 01 20:31:13 compute-0 ceph-mon[75527]: monmap epoch 1
Dec 01 20:31:13 compute-0 ceph-mon[75527]: fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:13 compute-0 ceph-mon[75527]: last_changed 2025-12-01T20:31:09.927398+0000
Dec 01 20:31:13 compute-0 ceph-mon[75527]: created 2025-12-01T20:31:09.927398+0000
Dec 01 20:31:13 compute-0 ceph-mon[75527]: min_mon_release 20 (tentacle)
Dec 01 20:31:13 compute-0 ceph-mon[75527]: election_strategy: 1
Dec 01 20:31:13 compute-0 ceph-mon[75527]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 01 20:31:13 compute-0 ceph-mon[75527]: fsmap 
Dec 01 20:31:13 compute-0 ceph-mon[75527]: osdmap e1: 0 total, 0 up, 0 in
Dec 01 20:31:13 compute-0 ceph-mon[75527]: mgrmap e1: no daemons active
Dec 01 20:31:13 compute-0 ceph-mon[75527]: from='client.? 192.168.122.100:0/1181850067' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 01 20:31:13 compute-0 ceph-mon[75527]: from='client.? 192.168.122.100:0/21736640' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 01 20:31:13 compute-0 ceph-mon[75527]: from='client.? 192.168.122.100:0/21736640' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 01 20:31:13 compute-0 ceph-mon[75527]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:31:13 compute-0 ceph-mon[75527]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1410511037' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:13 compute-0 systemd[1]: libpod-c7814fda8f02f34478050d76fc55f3e27f88d4ff80094f9df3509e7910fba15c.scope: Deactivated successfully.
Dec 01 20:31:13 compute-0 podman[75673]: 2025-12-01 20:31:13.338819409 +0000 UTC m=+0.319264949 container died c7814fda8f02f34478050d76fc55f3e27f88d4ff80094f9df3509e7910fba15c (image=quay.io/ceph/ceph:v20, name=recursing_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ac6006fe4e4b9b4d8510cc179b57e339257c11090ce75b26b7eb69c3847c8ab-merged.mount: Deactivated successfully.
Dec 01 20:31:13 compute-0 podman[75673]: 2025-12-01 20:31:13.378696222 +0000 UTC m=+0.359141752 container remove c7814fda8f02f34478050d76fc55f3e27f88d4ff80094f9df3509e7910fba15c (image=quay.io/ceph/ceph:v20, name=recursing_ptolemy, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:13 compute-0 systemd[1]: libpod-conmon-c7814fda8f02f34478050d76fc55f3e27f88d4ff80094f9df3509e7910fba15c.scope: Deactivated successfully.
Dec 01 20:31:13 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for dcf60a89-bba0-58b0-a1bf-d4bde723201b...
Dec 01 20:31:13 compute-0 ceph-mon[75527]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 01 20:31:13 compute-0 ceph-mon[75527]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 01 20:31:13 compute-0 ceph-mon[75527]: mon.compute-0@0(leader) e1 shutdown
Dec 01 20:31:13 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0[75523]: 2025-12-01T20:31:13.588+0000 7f75acfa5640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 01 20:31:13 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0[75523]: 2025-12-01T20:31:13.588+0000 7f75acfa5640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 01 20:31:13 compute-0 ceph-mon[75527]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 01 20:31:13 compute-0 ceph-mon[75527]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 01 20:31:13 compute-0 podman[75759]: 2025-12-01 20:31:13.732554118 +0000 UTC m=+0.178568495 container died c0db23a316b60d9b1f84bc59850ae38fd5559aefcf15970ec1e9b5cdd5bf727b (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 01 20:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4369263d155a112d7e1df95e2242a66b41d4dce8c0ef50dcd3b740201824f437-merged.mount: Deactivated successfully.
Dec 01 20:31:13 compute-0 podman[75759]: 2025-12-01 20:31:13.785935672 +0000 UTC m=+0.231950059 container remove c0db23a316b60d9b1f84bc59850ae38fd5559aefcf15970ec1e9b5cdd5bf727b (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:13 compute-0 bash[75759]: ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0
Dec 01 20:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 01 20:31:13 compute-0 systemd[1]: ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b@mon.compute-0.service: Deactivated successfully.
Dec 01 20:31:13 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:31:13 compute-0 systemd[1]: Starting Ceph mon.compute-0 for dcf60a89-bba0-58b0-a1bf-d4bde723201b...
Dec 01 20:31:14 compute-0 podman[75860]: 2025-12-01 20:31:14.18033186 +0000 UTC m=+0.048701389 container create 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8442a04428748ba48bd8587c4127e20f9a7a0585a0aa1f58abd5d39a264663/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8442a04428748ba48bd8587c4127e20f9a7a0585a0aa1f58abd5d39a264663/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8442a04428748ba48bd8587c4127e20f9a7a0585a0aa1f58abd5d39a264663/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8442a04428748ba48bd8587c4127e20f9a7a0585a0aa1f58abd5d39a264663/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:14 compute-0 podman[75860]: 2025-12-01 20:31:14.160684088 +0000 UTC m=+0.029053597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:14 compute-0 podman[75860]: 2025-12-01 20:31:14.257484814 +0000 UTC m=+0.125854343 container init 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 01 20:31:14 compute-0 podman[75860]: 2025-12-01 20:31:14.272611595 +0000 UTC m=+0.140981094 container start 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 01 20:31:14 compute-0 bash[75860]: 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da
Dec 01 20:31:14 compute-0 systemd[1]: Started Ceph mon.compute-0 for dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:31:14 compute-0 ceph-mon[75880]: set uid:gid to 167:167 (ceph:ceph)
Dec 01 20:31:14 compute-0 ceph-mon[75880]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 01 20:31:14 compute-0 ceph-mon[75880]: pidfile_write: ignore empty --pid-file
Dec 01 20:31:14 compute-0 ceph-mon[75880]: load: jerasure load: lrc 
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: RocksDB version: 7.9.2
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Git sha 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: DB SUMMARY
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: DB Session ID:  KWBZEPI2BH59SPE6E2I4
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: CURRENT file:  CURRENT
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: IDENTITY file:  IDENTITY
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60223 ; 
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                         Options.error_if_exists: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                       Options.create_if_missing: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                         Options.paranoid_checks: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                                     Options.env: 0x55a3cdeb9440
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                                Options.info_log: 0x55a3cf1e5e80
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                Options.max_file_opening_threads: 16
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                              Options.statistics: (nil)
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                               Options.use_fsync: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                       Options.max_log_file_size: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                         Options.allow_fallocate: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                        Options.use_direct_reads: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:          Options.create_missing_column_families: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                              Options.db_log_dir: 
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                                 Options.wal_dir: 
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                   Options.advise_random_on_open: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                    Options.write_buffer_manager: 0x55a3cf230140
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                            Options.rate_limiter: (nil)
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                  Options.unordered_write: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                               Options.row_cache: None
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                              Options.wal_filter: None
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.allow_ingest_behind: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.two_write_queues: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.manual_wal_flush: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.wal_compression: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.atomic_flush: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                 Options.log_readahead_size: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.allow_data_in_errors: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.db_host_id: __hostname__
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.max_background_jobs: 2
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.max_background_compactions: -1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.max_subcompactions: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.max_total_wal_size: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                          Options.max_open_files: -1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                          Options.bytes_per_sync: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:       Options.compaction_readahead_size: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                  Options.max_background_flushes: -1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Compression algorithms supported:
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         kZSTD supported: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         kXpressCompression supported: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         kBZip2Compression supported: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         kLZ4Compression supported: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         kZlibCompression supported: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         kLZ4HCCompression supported: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         kSnappyCompression supported: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:           Options.merge_operator: 
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:        Options.compaction_filter: None
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a3cf23ca00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a3cf2218d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:        Options.write_buffer_size: 33554432
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:  Options.max_write_buffer_number: 2
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:          Options.compression: NoCompression
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.num_levels: 7
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9d350523-84d7-4671-b42c-85f993c10d4b
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621074337309, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621074341490, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58422, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55774, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621074, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621074341603, "job": 1, "event": "recovery_finished"}
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a3cf24ee00
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: DB pointer 0x55a3cf398000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:31:14 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.44 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   60.44 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.68 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.68 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a3cf2218d0#2 capacity: 512.00 MB usage: 6.38 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(3,5.53 KB,0.001055%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 01 20:31:14 compute-0 ceph-mon[75880]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@-1(???) e1 preinit fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@-1(???).mds e1 new map
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-12-01T20:31:12:115619+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 01 20:31:14 compute-0 ceph-mon[75880]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 01 20:31:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 01 20:31:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : last_changed 2025-12-01T20:31:09.927398+0000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : created 2025-12-01T20:31:09.927398+0000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 01 20:31:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 01 20:31:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : fsmap 
Dec 01 20:31:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 01 20:31:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 01 20:31:14 compute-0 podman[75881]: 2025-12-01 20:31:14.374727917 +0000 UTC m=+0.058580696 container create b9275d45ec4299ef3236a7e14028bb7f099594148676b05223e7a1c3fc95d182 (image=quay.io/ceph/ceph:v20, name=sharp_williamson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 01 20:31:14 compute-0 systemd[1]: Started libpod-conmon-b9275d45ec4299ef3236a7e14028bb7f099594148676b05223e7a1c3fc95d182.scope.
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 01 20:31:14 compute-0 ceph-mon[75880]: monmap epoch 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:14 compute-0 ceph-mon[75880]: last_changed 2025-12-01T20:31:09.927398+0000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: created 2025-12-01T20:31:09.927398+0000
Dec 01 20:31:14 compute-0 ceph-mon[75880]: min_mon_release 20 (tentacle)
Dec 01 20:31:14 compute-0 ceph-mon[75880]: election_strategy: 1
Dec 01 20:31:14 compute-0 ceph-mon[75880]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 01 20:31:14 compute-0 ceph-mon[75880]: fsmap 
Dec 01 20:31:14 compute-0 ceph-mon[75880]: osdmap e1: 0 total, 0 up, 0 in
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mgrmap e1: no daemons active
Dec 01 20:31:14 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec82aea77ba52a01fab6f743248df29c086fa2c03a437c570571f50bf92c56eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec82aea77ba52a01fab6f743248df29c086fa2c03a437c570571f50bf92c56eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec82aea77ba52a01fab6f743248df29c086fa2c03a437c570571f50bf92c56eb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:14 compute-0 podman[75881]: 2025-12-01 20:31:14.359426741 +0000 UTC m=+0.043279540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:14 compute-0 podman[75881]: 2025-12-01 20:31:14.455837624 +0000 UTC m=+0.139690423 container init b9275d45ec4299ef3236a7e14028bb7f099594148676b05223e7a1c3fc95d182 (image=quay.io/ceph/ceph:v20, name=sharp_williamson, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:14 compute-0 podman[75881]: 2025-12-01 20:31:14.469378387 +0000 UTC m=+0.153231206 container start b9275d45ec4299ef3236a7e14028bb7f099594148676b05223e7a1c3fc95d182 (image=quay.io/ceph/ceph:v20, name=sharp_williamson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 01 20:31:14 compute-0 podman[75881]: 2025-12-01 20:31:14.473206625 +0000 UTC m=+0.157059464 container attach b9275d45ec4299ef3236a7e14028bb7f099594148676b05223e7a1c3fc95d182 (image=quay.io/ceph/ceph:v20, name=sharp_williamson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 01 20:31:14 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 01 20:31:14 compute-0 systemd[1]: libpod-b9275d45ec4299ef3236a7e14028bb7f099594148676b05223e7a1c3fc95d182.scope: Deactivated successfully.
Dec 01 20:31:14 compute-0 podman[75881]: 2025-12-01 20:31:14.712467411 +0000 UTC m=+0.396320200 container died b9275d45ec4299ef3236a7e14028bb7f099594148676b05223e7a1c3fc95d182 (image=quay.io/ceph/ceph:v20, name=sharp_williamson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Dec 01 20:31:14 compute-0 podman[75881]: 2025-12-01 20:31:14.757313718 +0000 UTC m=+0.441166497 container remove b9275d45ec4299ef3236a7e14028bb7f099594148676b05223e7a1c3fc95d182 (image=quay.io/ceph/ceph:v20, name=sharp_williamson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 01 20:31:14 compute-0 systemd[1]: libpod-conmon-b9275d45ec4299ef3236a7e14028bb7f099594148676b05223e7a1c3fc95d182.scope: Deactivated successfully.
Dec 01 20:31:14 compute-0 podman[75974]: 2025-12-01 20:31:14.817339168 +0000 UTC m=+0.039306546 container create 6c1226a48322fd3f1bde488be5d6c288bc22244afdeb321bcfbbb1bfe5641df3 (image=quay.io/ceph/ceph:v20, name=festive_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:31:14 compute-0 systemd[1]: Started libpod-conmon-6c1226a48322fd3f1bde488be5d6c288bc22244afdeb321bcfbbb1bfe5641df3.scope.
Dec 01 20:31:14 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:14 compute-0 podman[75974]: 2025-12-01 20:31:14.800635348 +0000 UTC m=+0.022602736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f67791debfbe3b6b076a546ee2a4ea3e73408d2ed4fbe3e0d9fbbe78ef8eab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f67791debfbe3b6b076a546ee2a4ea3e73408d2ed4fbe3e0d9fbbe78ef8eab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f67791debfbe3b6b076a546ee2a4ea3e73408d2ed4fbe3e0d9fbbe78ef8eab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:14 compute-0 podman[75974]: 2025-12-01 20:31:14.925556171 +0000 UTC m=+0.147523719 container init 6c1226a48322fd3f1bde488be5d6c288bc22244afdeb321bcfbbb1bfe5641df3 (image=quay.io/ceph/ceph:v20, name=festive_hermann, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:14 compute-0 podman[75974]: 2025-12-01 20:31:14.937465981 +0000 UTC m=+0.159433369 container start 6c1226a48322fd3f1bde488be5d6c288bc22244afdeb321bcfbbb1bfe5641df3 (image=quay.io/ceph/ceph:v20, name=festive_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:14 compute-0 podman[75974]: 2025-12-01 20:31:14.94126696 +0000 UTC m=+0.163234378 container attach 6c1226a48322fd3f1bde488be5d6c288bc22244afdeb321bcfbbb1bfe5641df3 (image=quay.io/ceph/ceph:v20, name=festive_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 01 20:31:15 compute-0 systemd[1]: libpod-6c1226a48322fd3f1bde488be5d6c288bc22244afdeb321bcfbbb1bfe5641df3.scope: Deactivated successfully.
Dec 01 20:31:15 compute-0 podman[75974]: 2025-12-01 20:31:15.153713639 +0000 UTC m=+0.375681067 container died 6c1226a48322fd3f1bde488be5d6c288bc22244afdeb321bcfbbb1bfe5641df3 (image=quay.io/ceph/ceph:v20, name=festive_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-55f67791debfbe3b6b076a546ee2a4ea3e73408d2ed4fbe3e0d9fbbe78ef8eab-merged.mount: Deactivated successfully.
Dec 01 20:31:15 compute-0 podman[75974]: 2025-12-01 20:31:15.207795315 +0000 UTC m=+0.429762733 container remove 6c1226a48322fd3f1bde488be5d6c288bc22244afdeb321bcfbbb1bfe5641df3 (image=quay.io/ceph/ceph:v20, name=festive_hermann, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 20:31:15 compute-0 systemd[1]: libpod-conmon-6c1226a48322fd3f1bde488be5d6c288bc22244afdeb321bcfbbb1bfe5641df3.scope: Deactivated successfully.
Dec 01 20:31:15 compute-0 systemd[1]: Reloading.
Dec 01 20:31:15 compute-0 systemd-rc-local-generator[76059]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:15 compute-0 systemd-sysv-generator[76062]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:15 compute-0 systemd[1]: Reloading.
Dec 01 20:31:15 compute-0 systemd-rc-local-generator[76098]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:15 compute-0 systemd-sysv-generator[76102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:15 compute-0 systemd[1]: Starting Ceph mgr.compute-0.xhvuzu for dcf60a89-bba0-58b0-a1bf-d4bde723201b...
Dec 01 20:31:16 compute-0 podman[76155]: 2025-12-01 20:31:16.120456022 +0000 UTC m=+0.037169190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:16 compute-0 podman[76155]: 2025-12-01 20:31:16.258487052 +0000 UTC m=+0.175200230 container create c450ae80f1c18b987b62ed64e86efa059e345361836dcf26c588dfef69686e49 (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ae0cf1f0b0cac8b8b7977353fcad80507c055d8d02c82ba3ff0e18bf7ac94b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ae0cf1f0b0cac8b8b7977353fcad80507c055d8d02c82ba3ff0e18bf7ac94b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ae0cf1f0b0cac8b8b7977353fcad80507c055d8d02c82ba3ff0e18bf7ac94b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ae0cf1f0b0cac8b8b7977353fcad80507c055d8d02c82ba3ff0e18bf7ac94b/merged/var/lib/ceph/mgr/ceph-compute-0.xhvuzu supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:16 compute-0 podman[76155]: 2025-12-01 20:31:16.357916961 +0000 UTC m=+0.274630119 container init c450ae80f1c18b987b62ed64e86efa059e345361836dcf26c588dfef69686e49 (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 01 20:31:16 compute-0 podman[76155]: 2025-12-01 20:31:16.366988403 +0000 UTC m=+0.283701541 container start c450ae80f1c18b987b62ed64e86efa059e345361836dcf26c588dfef69686e49 (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 01 20:31:16 compute-0 bash[76155]: c450ae80f1c18b987b62ed64e86efa059e345361836dcf26c588dfef69686e49
Dec 01 20:31:16 compute-0 systemd[1]: Started Ceph mgr.compute-0.xhvuzu for dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:31:16 compute-0 ceph-mgr[76174]: set uid:gid to 167:167 (ceph:ceph)
Dec 01 20:31:16 compute-0 ceph-mgr[76174]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 01 20:31:16 compute-0 ceph-mgr[76174]: pidfile_write: ignore empty --pid-file
Dec 01 20:31:16 compute-0 podman[76175]: 2025-12-01 20:31:16.479791667 +0000 UTC m=+0.057485721 container create 67b7256862d4209fe305a6c88e07e3ed4dd5c381361ac15059b16e028fa44d69 (image=quay.io/ceph/ceph:v20, name=sweet_moser, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:16 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'alerts'
Dec 01 20:31:16 compute-0 systemd[1]: Started libpod-conmon-67b7256862d4209fe305a6c88e07e3ed4dd5c381361ac15059b16e028fa44d69.scope.
Dec 01 20:31:16 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d341349a8e2662dd8c43524de4b676f4b1fa4d92facd78533cfef814821d08a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d341349a8e2662dd8c43524de4b676f4b1fa4d92facd78533cfef814821d08a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d341349a8e2662dd8c43524de4b676f4b1fa4d92facd78533cfef814821d08a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:16 compute-0 podman[76175]: 2025-12-01 20:31:16.462394386 +0000 UTC m=+0.040088430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:16 compute-0 podman[76175]: 2025-12-01 20:31:16.563394463 +0000 UTC m=+0.141088497 container init 67b7256862d4209fe305a6c88e07e3ed4dd5c381361ac15059b16e028fa44d69 (image=quay.io/ceph/ceph:v20, name=sweet_moser, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 01 20:31:16 compute-0 podman[76175]: 2025-12-01 20:31:16.568583585 +0000 UTC m=+0.146277609 container start 67b7256862d4209fe305a6c88e07e3ed4dd5c381361ac15059b16e028fa44d69 (image=quay.io/ceph/ceph:v20, name=sweet_moser, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:16 compute-0 podman[76175]: 2025-12-01 20:31:16.5716452 +0000 UTC m=+0.149339224 container attach 67b7256862d4209fe305a6c88e07e3ed4dd5c381361ac15059b16e028fa44d69 (image=quay.io/ceph/ceph:v20, name=sweet_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:16 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'balancer'
Dec 01 20:31:16 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'cephadm'
Dec 01 20:31:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 01 20:31:16 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2107620920' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 01 20:31:16 compute-0 sweet_moser[76211]: 
Dec 01 20:31:16 compute-0 sweet_moser[76211]: {
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "health": {
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "status": "HEALTH_OK",
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "checks": {},
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "mutes": []
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     },
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "election_epoch": 5,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "quorum": [
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         0
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     ],
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "quorum_names": [
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "compute-0"
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     ],
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "quorum_age": 2,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "monmap": {
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "epoch": 1,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "min_mon_release_name": "tentacle",
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "num_mons": 1
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     },
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "osdmap": {
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "epoch": 1,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "num_osds": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "num_up_osds": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "osd_up_since": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "num_in_osds": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "osd_in_since": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "num_remapped_pgs": 0
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     },
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "pgmap": {
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "pgs_by_state": [],
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "num_pgs": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "num_pools": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "num_objects": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "data_bytes": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "bytes_used": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "bytes_avail": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "bytes_total": 0
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     },
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "fsmap": {
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "epoch": 1,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "btime": "2025-12-01T20:31:12.115619+0000",
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "by_rank": [],
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "up:standby": 0
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     },
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "mgrmap": {
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "available": false,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "num_standbys": 0,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "modules": [
Dec 01 20:31:16 compute-0 sweet_moser[76211]:             "iostat",
Dec 01 20:31:16 compute-0 sweet_moser[76211]:             "nfs"
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         ],
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "services": {}
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     },
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "servicemap": {
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "epoch": 1,
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "modified": "2025-12-01T20:31:12.117544+0000",
Dec 01 20:31:16 compute-0 sweet_moser[76211]:         "services": {}
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     },
Dec 01 20:31:16 compute-0 sweet_moser[76211]:     "progress_events": {}
Dec 01 20:31:16 compute-0 sweet_moser[76211]: }
Dec 01 20:31:16 compute-0 systemd[1]: libpod-67b7256862d4209fe305a6c88e07e3ed4dd5c381361ac15059b16e028fa44d69.scope: Deactivated successfully.
Dec 01 20:31:16 compute-0 podman[76175]: 2025-12-01 20:31:16.778906748 +0000 UTC m=+0.356600782 container died 67b7256862d4209fe305a6c88e07e3ed4dd5c381361ac15059b16e028fa44d69 (image=quay.io/ceph/ceph:v20, name=sweet_moser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:16 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2107620920' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 01 20:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d341349a8e2662dd8c43524de4b676f4b1fa4d92facd78533cfef814821d08a-merged.mount: Deactivated successfully.
Dec 01 20:31:16 compute-0 podman[76175]: 2025-12-01 20:31:16.829273708 +0000 UTC m=+0.406967732 container remove 67b7256862d4209fe305a6c88e07e3ed4dd5c381361ac15059b16e028fa44d69 (image=quay.io/ceph/ceph:v20, name=sweet_moser, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:16 compute-0 systemd[1]: libpod-conmon-67b7256862d4209fe305a6c88e07e3ed4dd5c381361ac15059b16e028fa44d69.scope: Deactivated successfully.
Dec 01 20:31:17 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'crash'
Dec 01 20:31:17 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'dashboard'
Dec 01 20:31:18 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'devicehealth'
Dec 01 20:31:18 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'diskprediction_local'
Dec 01 20:31:18 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 01 20:31:18 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 01 20:31:18 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]:   from numpy import show_config as show_numpy_config
Dec 01 20:31:18 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'influx'
Dec 01 20:31:18 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'insights'
Dec 01 20:31:18 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'iostat'
Dec 01 20:31:18 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'k8sevents'
Dec 01 20:31:18 compute-0 podman[76260]: 2025-12-01 20:31:18.902511026 +0000 UTC m=+0.047068687 container create 1b5a899d00d09f8a13fc26df9186e3354b64204214f673a8ad61252b82119104 (image=quay.io/ceph/ceph:v20, name=amazing_hofstadter, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:18 compute-0 systemd[1]: Started libpod-conmon-1b5a899d00d09f8a13fc26df9186e3354b64204214f673a8ad61252b82119104.scope.
Dec 01 20:31:18 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae89e8ed786b27c916a3ab4a5fb06bb36daecb6916135ace10a3adcc5e19f7a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae89e8ed786b27c916a3ab4a5fb06bb36daecb6916135ace10a3adcc5e19f7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae89e8ed786b27c916a3ab4a5fb06bb36daecb6916135ace10a3adcc5e19f7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:18 compute-0 podman[76260]: 2025-12-01 20:31:18.883602487 +0000 UTC m=+0.028160168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:18 compute-0 podman[76260]: 2025-12-01 20:31:18.981083055 +0000 UTC m=+0.125640736 container init 1b5a899d00d09f8a13fc26df9186e3354b64204214f673a8ad61252b82119104 (image=quay.io/ceph/ceph:v20, name=amazing_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:18 compute-0 podman[76260]: 2025-12-01 20:31:18.992061037 +0000 UTC m=+0.136618698 container start 1b5a899d00d09f8a13fc26df9186e3354b64204214f673a8ad61252b82119104 (image=quay.io/ceph/ceph:v20, name=amazing_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:18 compute-0 podman[76260]: 2025-12-01 20:31:18.995190864 +0000 UTC m=+0.139748585 container attach 1b5a899d00d09f8a13fc26df9186e3354b64204214f673a8ad61252b82119104 (image=quay.io/ceph/ceph:v20, name=amazing_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:19 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'localpool'
Dec 01 20:31:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 01 20:31:19 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898656864' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]: 
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]: {
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "health": {
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "status": "HEALTH_OK",
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "checks": {},
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "mutes": []
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     },
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "election_epoch": 5,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "quorum": [
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         0
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     ],
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "quorum_names": [
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "compute-0"
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     ],
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "quorum_age": 4,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "monmap": {
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "epoch": 1,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "min_mon_release_name": "tentacle",
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "num_mons": 1
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     },
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "osdmap": {
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "epoch": 1,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "num_osds": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "num_up_osds": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "osd_up_since": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "num_in_osds": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "osd_in_since": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "num_remapped_pgs": 0
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     },
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "pgmap": {
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "pgs_by_state": [],
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "num_pgs": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "num_pools": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "num_objects": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "data_bytes": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "bytes_used": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "bytes_avail": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "bytes_total": 0
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     },
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "fsmap": {
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "epoch": 1,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "btime": "2025-12-01T20:31:12.115619+0000",
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "by_rank": [],
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "up:standby": 0
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     },
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "mgrmap": {
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "available": false,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "num_standbys": 0,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "modules": [
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:             "iostat",
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:             "nfs"
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         ],
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "services": {}
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     },
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "servicemap": {
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "epoch": 1,
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "modified": "2025-12-01T20:31:12.117544+0000",
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:         "services": {}
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     },
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]:     "progress_events": {}
Dec 01 20:31:19 compute-0 amazing_hofstadter[76275]: }
Dec 01 20:31:19 compute-0 systemd[1]: libpod-1b5a899d00d09f8a13fc26df9186e3354b64204214f673a8ad61252b82119104.scope: Deactivated successfully.
Dec 01 20:31:19 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'mds_autoscaler'
Dec 01 20:31:19 compute-0 podman[76301]: 2025-12-01 20:31:19.207115767 +0000 UTC m=+0.022484091 container died 1b5a899d00d09f8a13fc26df9186e3354b64204214f673a8ad61252b82119104 (image=quay.io/ceph/ceph:v20, name=amazing_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:19 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2898656864' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 01 20:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ae89e8ed786b27c916a3ab4a5fb06bb36daecb6916135ace10a3adcc5e19f7a-merged.mount: Deactivated successfully.
Dec 01 20:31:19 compute-0 podman[76301]: 2025-12-01 20:31:19.252383288 +0000 UTC m=+0.067751542 container remove 1b5a899d00d09f8a13fc26df9186e3354b64204214f673a8ad61252b82119104 (image=quay.io/ceph/ceph:v20, name=amazing_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 01 20:31:19 compute-0 systemd[1]: libpod-conmon-1b5a899d00d09f8a13fc26df9186e3354b64204214f673a8ad61252b82119104.scope: Deactivated successfully.
Dec 01 20:31:19 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'mirroring'
Dec 01 20:31:19 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'nfs'
Dec 01 20:31:19 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'orchestrator'
Dec 01 20:31:19 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'osd_perf_query'
Dec 01 20:31:20 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'osd_support'
Dec 01 20:31:20 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'pg_autoscaler'
Dec 01 20:31:20 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'progress'
Dec 01 20:31:20 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'prometheus'
Dec 01 20:31:20 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'rbd_support'
Dec 01 20:31:20 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'rgw'
Dec 01 20:31:20 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'rook'
Dec 01 20:31:21 compute-0 podman[76316]: 2025-12-01 20:31:21.357739538 +0000 UTC m=+0.064984066 container create 95e4c456fbf69961b77d0d9d9356e18ecf8bcc66895e1f73c4cd9650fe8fe8d1 (image=quay.io/ceph/ceph:v20, name=hardcore_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:21 compute-0 systemd[1]: Started libpod-conmon-95e4c456fbf69961b77d0d9d9356e18ecf8bcc66895e1f73c4cd9650fe8fe8d1.scope.
Dec 01 20:31:21 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:21 compute-0 podman[76316]: 2025-12-01 20:31:21.329833848 +0000 UTC m=+0.037078406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338751c5057978e8475430189a3a7bda0aea87fd8f5cd060738ce6950cbcf83e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338751c5057978e8475430189a3a7bda0aea87fd8f5cd060738ce6950cbcf83e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338751c5057978e8475430189a3a7bda0aea87fd8f5cd060738ce6950cbcf83e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:21 compute-0 podman[76316]: 2025-12-01 20:31:21.438350999 +0000 UTC m=+0.145595587 container init 95e4c456fbf69961b77d0d9d9356e18ecf8bcc66895e1f73c4cd9650fe8fe8d1 (image=quay.io/ceph/ceph:v20, name=hardcore_yalow, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:21 compute-0 podman[76316]: 2025-12-01 20:31:21.447893497 +0000 UTC m=+0.155138035 container start 95e4c456fbf69961b77d0d9d9356e18ecf8bcc66895e1f73c4cd9650fe8fe8d1 (image=quay.io/ceph/ceph:v20, name=hardcore_yalow, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:21 compute-0 podman[76316]: 2025-12-01 20:31:21.452640224 +0000 UTC m=+0.159884822 container attach 95e4c456fbf69961b77d0d9d9356e18ecf8bcc66895e1f73c4cd9650fe8fe8d1 (image=quay.io/ceph/ceph:v20, name=hardcore_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:31:21 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'selftest'
Dec 01 20:31:21 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'smb'
Dec 01 20:31:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 01 20:31:21 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385756960' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]: 
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]: {
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "health": {
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "status": "HEALTH_OK",
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "checks": {},
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "mutes": []
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     },
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "election_epoch": 5,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "quorum": [
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         0
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     ],
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "quorum_names": [
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "compute-0"
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     ],
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "quorum_age": 7,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "monmap": {
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "epoch": 1,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "min_mon_release_name": "tentacle",
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "num_mons": 1
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     },
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "osdmap": {
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "epoch": 1,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "num_osds": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "num_up_osds": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "osd_up_since": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "num_in_osds": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "osd_in_since": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "num_remapped_pgs": 0
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     },
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "pgmap": {
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "pgs_by_state": [],
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "num_pgs": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "num_pools": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "num_objects": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "data_bytes": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "bytes_used": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "bytes_avail": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "bytes_total": 0
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     },
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "fsmap": {
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "epoch": 1,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "btime": "2025-12-01T20:31:12.115619+0000",
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "by_rank": [],
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "up:standby": 0
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     },
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "mgrmap": {
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "available": false,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "num_standbys": 0,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "modules": [
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:             "iostat",
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:             "nfs"
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         ],
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "services": {}
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     },
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "servicemap": {
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "epoch": 1,
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "modified": "2025-12-01T20:31:12.117544+0000",
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:         "services": {}
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     },
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]:     "progress_events": {}
Dec 01 20:31:21 compute-0 hardcore_yalow[76333]: }
Dec 01 20:31:21 compute-0 systemd[1]: libpod-95e4c456fbf69961b77d0d9d9356e18ecf8bcc66895e1f73c4cd9650fe8fe8d1.scope: Deactivated successfully.
Dec 01 20:31:21 compute-0 podman[76316]: 2025-12-01 20:31:21.673558878 +0000 UTC m=+0.380803396 container died 95e4c456fbf69961b77d0d9d9356e18ecf8bcc66895e1f73c4cd9650fe8fe8d1 (image=quay.io/ceph/ceph:v20, name=hardcore_yalow, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 01 20:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-338751c5057978e8475430189a3a7bda0aea87fd8f5cd060738ce6950cbcf83e-merged.mount: Deactivated successfully.
Dec 01 20:31:21 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1385756960' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 01 20:31:21 compute-0 podman[76316]: 2025-12-01 20:31:21.712361487 +0000 UTC m=+0.419606015 container remove 95e4c456fbf69961b77d0d9d9356e18ecf8bcc66895e1f73c4cd9650fe8fe8d1 (image=quay.io/ceph/ceph:v20, name=hardcore_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 01 20:31:21 compute-0 systemd[1]: libpod-conmon-95e4c456fbf69961b77d0d9d9356e18ecf8bcc66895e1f73c4cd9650fe8fe8d1.scope: Deactivated successfully.
Dec 01 20:31:21 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'snap_schedule'
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'stats'
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'status'
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'telegraf'
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'telemetry'
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'test_orchestrator'
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'volumes'
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: ms_deliver_dispatch: unhandled message 0x55fc412ff860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.xhvuzu
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr handle_mgr_map Activating!
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr handle_mgr_map I am now activating
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.xhvuzu(active, starting, since 0.0129487s)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mds metadata"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e1 all = 1
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mon metadata"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.xhvuzu", "id": "compute-0.xhvuzu"} v 0)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mgr metadata", "who": "compute-0.xhvuzu", "id": "compute-0.xhvuzu"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: balancer
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [balancer INFO root] Starting
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : Manager daemon compute-0.xhvuzu is now available
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:31:22
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [balancer INFO root] No pools available
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: crash
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: devicehealth
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: iostat
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [devicehealth INFO root] Starting
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: nfs
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: orchestrator
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: pg_autoscaler
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: progress
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [progress INFO root] Loading...
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [progress INFO root] No stored events to load
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [progress INFO root] Loaded [] historic events
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [progress INFO root] Loaded OSDMap, ready.
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [rbd_support INFO root] recovery thread starting
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [rbd_support INFO root] starting setup
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: rbd_support
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: status
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: Activating manager daemon compute-0.xhvuzu
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mgrmap e2: compute-0.xhvuzu(active, starting, since 0.0129487s)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mds metadata"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mon[75880]: from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mon[75880]: from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mon metadata"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mon[75880]: from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mon[75880]: from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mgr metadata", "who": "compute-0.xhvuzu", "id": "compute-0.xhvuzu"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mon[75880]: Manager daemon compute-0.xhvuzu is now available
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: telemetry
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/mirror_snapshot_schedule"} v 0)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/mirror_snapshot_schedule"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [rbd_support INFO root] PerfHandler: starting
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TaskHandler: starting
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/trash_purge_schedule"} v 0)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/trash_purge_schedule"} : dispatch
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: [rbd_support INFO root] setup complete
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 01 20:31:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:22 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: volumes
Dec 01 20:31:23 compute-0 podman[76449]: 2025-12-01 20:31:23.786746181 +0000 UTC m=+0.043708193 container create 11a238f6e663644301892bcffde89306a85db4136857f88c18cb099b85dad711 (image=quay.io/ceph/ceph:v20, name=jovial_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:31:23 compute-0 systemd[1]: Started libpod-conmon-11a238f6e663644301892bcffde89306a85db4136857f88c18cb099b85dad711.scope.
Dec 01 20:31:23 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eceaf680aee3b702fdf39b961e30838eb62c8ad3050aa184adeb6cf7d905cef9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eceaf680aee3b702fdf39b961e30838eb62c8ad3050aa184adeb6cf7d905cef9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eceaf680aee3b702fdf39b961e30838eb62c8ad3050aa184adeb6cf7d905cef9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:23 compute-0 podman[76449]: 2025-12-01 20:31:23.84134733 +0000 UTC m=+0.098309332 container init 11a238f6e663644301892bcffde89306a85db4136857f88c18cb099b85dad711 (image=quay.io/ceph/ceph:v20, name=jovial_swirles, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 01 20:31:23 compute-0 podman[76449]: 2025-12-01 20:31:23.845912308 +0000 UTC m=+0.102874310 container start 11a238f6e663644301892bcffde89306a85db4136857f88c18cb099b85dad711 (image=quay.io/ceph/ceph:v20, name=jovial_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:23 compute-0 podman[76449]: 2025-12-01 20:31:23.849361943 +0000 UTC m=+0.106323945 container attach 11a238f6e663644301892bcffde89306a85db4136857f88c18cb099b85dad711 (image=quay.io/ceph/ceph:v20, name=jovial_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:23 compute-0 podman[76449]: 2025-12-01 20:31:23.771301428 +0000 UTC m=+0.028263450 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:23 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.xhvuzu(active, since 1.02096s)
Dec 01 20:31:23 compute-0 ceph-mon[75880]: from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/mirror_snapshot_schedule"} : dispatch
Dec 01 20:31:23 compute-0 ceph-mon[75880]: from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:23 compute-0 ceph-mon[75880]: from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/trash_purge_schedule"} : dispatch
Dec 01 20:31:23 compute-0 ceph-mon[75880]: from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:23 compute-0 ceph-mon[75880]: from='mgr.14102 192.168.122.100:0/3615239697' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:23 compute-0 ceph-mon[75880]: mgrmap e3: compute-0.xhvuzu(active, since 1.02096s)
Dec 01 20:31:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 01 20:31:24 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373256402' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 01 20:31:24 compute-0 jovial_swirles[76465]: 
Dec 01 20:31:24 compute-0 jovial_swirles[76465]: {
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "health": {
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "status": "HEALTH_OK",
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "checks": {},
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "mutes": []
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     },
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "election_epoch": 5,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "quorum": [
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         0
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     ],
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "quorum_names": [
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "compute-0"
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     ],
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "quorum_age": 9,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "monmap": {
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "epoch": 1,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "min_mon_release_name": "tentacle",
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "num_mons": 1
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     },
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "osdmap": {
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "epoch": 1,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "num_osds": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "num_up_osds": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "osd_up_since": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "num_in_osds": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "osd_in_since": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "num_remapped_pgs": 0
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     },
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "pgmap": {
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "pgs_by_state": [],
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "num_pgs": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "num_pools": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "num_objects": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "data_bytes": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "bytes_used": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "bytes_avail": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "bytes_total": 0
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     },
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "fsmap": {
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "epoch": 1,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "btime": "2025-12-01T20:31:12.115619+0000",
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "by_rank": [],
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "up:standby": 0
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     },
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "mgrmap": {
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "available": true,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "num_standbys": 0,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "modules": [
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:             "iostat",
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:             "nfs"
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         ],
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "services": {}
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     },
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "servicemap": {
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "epoch": 1,
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "modified": "2025-12-01T20:31:12.117544+0000",
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:         "services": {}
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     },
Dec 01 20:31:24 compute-0 jovial_swirles[76465]:     "progress_events": {}
Dec 01 20:31:24 compute-0 jovial_swirles[76465]: }
Dec 01 20:31:24 compute-0 systemd[1]: libpod-11a238f6e663644301892bcffde89306a85db4136857f88c18cb099b85dad711.scope: Deactivated successfully.
Dec 01 20:31:24 compute-0 podman[76449]: 2025-12-01 20:31:24.365380176 +0000 UTC m=+0.622342168 container died 11a238f6e663644301892bcffde89306a85db4136857f88c18cb099b85dad711 (image=quay.io/ceph/ceph:v20, name=jovial_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-eceaf680aee3b702fdf39b961e30838eb62c8ad3050aa184adeb6cf7d905cef9-merged.mount: Deactivated successfully.
Dec 01 20:31:24 compute-0 podman[76449]: 2025-12-01 20:31:24.406517693 +0000 UTC m=+0.663479705 container remove 11a238f6e663644301892bcffde89306a85db4136857f88c18cb099b85dad711 (image=quay.io/ceph/ceph:v20, name=jovial_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:24 compute-0 systemd[1]: libpod-conmon-11a238f6e663644301892bcffde89306a85db4136857f88c18cb099b85dad711.scope: Deactivated successfully.
Dec 01 20:31:24 compute-0 podman[76505]: 2025-12-01 20:31:24.477751543 +0000 UTC m=+0.048801662 container create 95460c543f5cb59209dd5db298eaad3311e3831ac1eab11d510c648e96cb0079 (image=quay.io/ceph/ceph:v20, name=zen_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:24 compute-0 systemd[1]: Started libpod-conmon-95460c543f5cb59209dd5db298eaad3311e3831ac1eab11d510c648e96cb0079.scope.
Dec 01 20:31:24 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674c302481d9a7e16e77128aa4b498a351e8fa441bce29bfbce3c04a39ce31d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674c302481d9a7e16e77128aa4b498a351e8fa441bce29bfbce3c04a39ce31d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674c302481d9a7e16e77128aa4b498a351e8fa441bce29bfbce3c04a39ce31d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674c302481d9a7e16e77128aa4b498a351e8fa441bce29bfbce3c04a39ce31d0/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:24 compute-0 podman[76505]: 2025-12-01 20:31:24.45666922 +0000 UTC m=+0.027719359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:24 compute-0 podman[76505]: 2025-12-01 20:31:24.55176242 +0000 UTC m=+0.122812559 container init 95460c543f5cb59209dd5db298eaad3311e3831ac1eab11d510c648e96cb0079 (image=quay.io/ceph/ceph:v20, name=zen_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 01 20:31:24 compute-0 podman[76505]: 2025-12-01 20:31:24.557277432 +0000 UTC m=+0.128327551 container start 95460c543f5cb59209dd5db298eaad3311e3831ac1eab11d510c648e96cb0079 (image=quay.io/ceph/ceph:v20, name=zen_gould, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 01 20:31:24 compute-0 podman[76505]: 2025-12-01 20:31:24.56060481 +0000 UTC m=+0.131654959 container attach 95460c543f5cb59209dd5db298eaad3311e3831ac1eab11d510c648e96cb0079 (image=quay.io/ceph/ceph:v20, name=zen_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 01 20:31:24 compute-0 ceph-mgr[76174]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 01 20:31:24 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 01 20:31:24 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1130345850' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 01 20:31:24 compute-0 zen_gould[76521]: 
Dec 01 20:31:24 compute-0 zen_gould[76521]: [global]
Dec 01 20:31:24 compute-0 zen_gould[76521]:         fsid = dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:24 compute-0 zen_gould[76521]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 01 20:31:24 compute-0 zen_gould[76521]:         osd_crush_chooseleaf_type = 0
Dec 01 20:31:24 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2373256402' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 01 20:31:24 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1130345850' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 01 20:31:24 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.xhvuzu(active, since 2s)
Dec 01 20:31:24 compute-0 systemd[1]: libpod-95460c543f5cb59209dd5db298eaad3311e3831ac1eab11d510c648e96cb0079.scope: Deactivated successfully.
Dec 01 20:31:24 compute-0 conmon[76521]: conmon 95460c543f5cb59209dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95460c543f5cb59209dd5db298eaad3311e3831ac1eab11d510c648e96cb0079.scope/container/memory.events
Dec 01 20:31:24 compute-0 podman[76505]: 2025-12-01 20:31:24.95161748 +0000 UTC m=+0.522667609 container died 95460c543f5cb59209dd5db298eaad3311e3831ac1eab11d510c648e96cb0079 (image=quay.io/ceph/ceph:v20, name=zen_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:31:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-674c302481d9a7e16e77128aa4b498a351e8fa441bce29bfbce3c04a39ce31d0-merged.mount: Deactivated successfully.
Dec 01 20:31:24 compute-0 podman[76505]: 2025-12-01 20:31:24.994831665 +0000 UTC m=+0.565881804 container remove 95460c543f5cb59209dd5db298eaad3311e3831ac1eab11d510c648e96cb0079 (image=quay.io/ceph/ceph:v20, name=zen_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 01 20:31:25 compute-0 systemd[1]: libpod-conmon-95460c543f5cb59209dd5db298eaad3311e3831ac1eab11d510c648e96cb0079.scope: Deactivated successfully.
Dec 01 20:31:25 compute-0 podman[76560]: 2025-12-01 20:31:25.061257353 +0000 UTC m=+0.045428621 container create cdaf0b620524d1ddde4f394cc80a0e509a918421a7af3a179c2bb177a3af7b42 (image=quay.io/ceph/ceph:v20, name=beautiful_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:25 compute-0 systemd[1]: Started libpod-conmon-cdaf0b620524d1ddde4f394cc80a0e509a918421a7af3a179c2bb177a3af7b42.scope.
Dec 01 20:31:25 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc3c74e55744475326f99a7889144b72f92219ecdf1e575a1417bfa2b6dd1b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc3c74e55744475326f99a7889144b72f92219ecdf1e575a1417bfa2b6dd1b8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc3c74e55744475326f99a7889144b72f92219ecdf1e575a1417bfa2b6dd1b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:25 compute-0 podman[76560]: 2025-12-01 20:31:25.039882374 +0000 UTC m=+0.024053652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:25 compute-0 podman[76560]: 2025-12-01 20:31:25.138328623 +0000 UTC m=+0.122499901 container init cdaf0b620524d1ddde4f394cc80a0e509a918421a7af3a179c2bb177a3af7b42 (image=quay.io/ceph/ceph:v20, name=beautiful_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 01 20:31:25 compute-0 podman[76560]: 2025-12-01 20:31:25.143254702 +0000 UTC m=+0.127425960 container start cdaf0b620524d1ddde4f394cc80a0e509a918421a7af3a179c2bb177a3af7b42 (image=quay.io/ceph/ceph:v20, name=beautiful_haibt, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 01 20:31:25 compute-0 podman[76560]: 2025-12-01 20:31:25.147900784 +0000 UTC m=+0.132072132 container attach cdaf0b620524d1ddde4f394cc80a0e509a918421a7af3a179c2bb177a3af7b42 (image=quay.io/ceph/ceph:v20, name=beautiful_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 01 20:31:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 01 20:31:25 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/710512491' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 01 20:31:25 compute-0 ceph-mon[75880]: mgrmap e4: compute-0.xhvuzu(active, since 2s)
Dec 01 20:31:25 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/710512491' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 01 20:31:25 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/710512491' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 01 20:31:25 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.xhvuzu(active, since 3s)
Dec 01 20:31:25 compute-0 ceph-mgr[76174]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 01 20:31:25 compute-0 systemd[1]: libpod-cdaf0b620524d1ddde4f394cc80a0e509a918421a7af3a179c2bb177a3af7b42.scope: Deactivated successfully.
Dec 01 20:31:25 compute-0 podman[76560]: 2025-12-01 20:31:25.98798091 +0000 UTC m=+0.972152178 container died cdaf0b620524d1ddde4f394cc80a0e509a918421a7af3a179c2bb177a3af7b42 (image=quay.io/ceph/ceph:v20, name=beautiful_haibt, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-efc3c74e55744475326f99a7889144b72f92219ecdf1e575a1417bfa2b6dd1b8-merged.mount: Deactivated successfully.
Dec 01 20:31:26 compute-0 podman[76560]: 2025-12-01 20:31:26.036200908 +0000 UTC m=+1.020372176 container remove cdaf0b620524d1ddde4f394cc80a0e509a918421a7af3a179c2bb177a3af7b42 (image=quay.io/ceph/ceph:v20, name=beautiful_haibt, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:26 compute-0 systemd[1]: libpod-conmon-cdaf0b620524d1ddde4f394cc80a0e509a918421a7af3a179c2bb177a3af7b42.scope: Deactivated successfully.
Dec 01 20:31:26 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: ignoring --setuser ceph since I am not root
Dec 01 20:31:26 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: ignoring --setgroup ceph since I am not root
Dec 01 20:31:26 compute-0 ceph-mgr[76174]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 01 20:31:26 compute-0 ceph-mgr[76174]: pidfile_write: ignore empty --pid-file
Dec 01 20:31:26 compute-0 podman[76613]: 2025-12-01 20:31:26.118624431 +0000 UTC m=+0.053501328 container create 6423e967c906039ef5d598440e99cd697fa5a2755b04b813fed526ca9bd0ff06 (image=quay.io/ceph/ceph:v20, name=affectionate_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:31:26 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'alerts'
Dec 01 20:31:26 compute-0 systemd[1]: Started libpod-conmon-6423e967c906039ef5d598440e99cd697fa5a2755b04b813fed526ca9bd0ff06.scope.
Dec 01 20:31:26 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dec33f9789d2c898aaf1bbdf0392e2263dd0af96e550565b7f15463c59ad559/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dec33f9789d2c898aaf1bbdf0392e2263dd0af96e550565b7f15463c59ad559/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dec33f9789d2c898aaf1bbdf0392e2263dd0af96e550565b7f15463c59ad559/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:26 compute-0 podman[76613]: 2025-12-01 20:31:26.094598441 +0000 UTC m=+0.029475318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:26 compute-0 podman[76613]: 2025-12-01 20:31:26.199182718 +0000 UTC m=+0.134059585 container init 6423e967c906039ef5d598440e99cd697fa5a2755b04b813fed526ca9bd0ff06 (image=quay.io/ceph/ceph:v20, name=affectionate_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:26 compute-0 podman[76613]: 2025-12-01 20:31:26.204534451 +0000 UTC m=+0.139411318 container start 6423e967c906039ef5d598440e99cd697fa5a2755b04b813fed526ca9bd0ff06 (image=quay.io/ceph/ceph:v20, name=affectionate_meninsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 01 20:31:26 compute-0 podman[76613]: 2025-12-01 20:31:26.207766494 +0000 UTC m=+0.142643351 container attach 6423e967c906039ef5d598440e99cd697fa5a2755b04b813fed526ca9bd0ff06 (image=quay.io/ceph/ceph:v20, name=affectionate_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 01 20:31:26 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'balancer'
Dec 01 20:31:26 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'cephadm'
Dec 01 20:31:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 01 20:31:26 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3499808362' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 01 20:31:26 compute-0 affectionate_meninsky[76649]: {
Dec 01 20:31:26 compute-0 affectionate_meninsky[76649]:     "epoch": 5,
Dec 01 20:31:26 compute-0 affectionate_meninsky[76649]:     "available": true,
Dec 01 20:31:26 compute-0 affectionate_meninsky[76649]:     "active_name": "compute-0.xhvuzu",
Dec 01 20:31:26 compute-0 affectionate_meninsky[76649]:     "num_standby": 0
Dec 01 20:31:26 compute-0 affectionate_meninsky[76649]: }
Dec 01 20:31:26 compute-0 systemd[1]: libpod-6423e967c906039ef5d598440e99cd697fa5a2755b04b813fed526ca9bd0ff06.scope: Deactivated successfully.
Dec 01 20:31:26 compute-0 podman[76613]: 2025-12-01 20:31:26.715184849 +0000 UTC m=+0.650061706 container died 6423e967c906039ef5d598440e99cd697fa5a2755b04b813fed526ca9bd0ff06 (image=quay.io/ceph/ceph:v20, name=affectionate_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dec33f9789d2c898aaf1bbdf0392e2263dd0af96e550565b7f15463c59ad559-merged.mount: Deactivated successfully.
Dec 01 20:31:26 compute-0 podman[76613]: 2025-12-01 20:31:26.751924797 +0000 UTC m=+0.686801654 container remove 6423e967c906039ef5d598440e99cd697fa5a2755b04b813fed526ca9bd0ff06 (image=quay.io/ceph/ceph:v20, name=affectionate_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:26 compute-0 systemd[1]: libpod-conmon-6423e967c906039ef5d598440e99cd697fa5a2755b04b813fed526ca9bd0ff06.scope: Deactivated successfully.
Dec 01 20:31:26 compute-0 podman[76698]: 2025-12-01 20:31:26.814163449 +0000 UTC m=+0.042722709 container create 9c34e54c6c410b404c03623923a1327ced044e43ff0ac70b0fdd96347132663c (image=quay.io/ceph/ceph:v20, name=fervent_ganguly, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:26 compute-0 systemd[1]: Started libpod-conmon-9c34e54c6c410b404c03623923a1327ced044e43ff0ac70b0fdd96347132663c.scope.
Dec 01 20:31:26 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c62b4dea1ec0b73a2e2e43d2c72fb99178fb124aaa199797870b97d56bf5856/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c62b4dea1ec0b73a2e2e43d2c72fb99178fb124aaa199797870b97d56bf5856/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c62b4dea1ec0b73a2e2e43d2c72fb99178fb124aaa199797870b97d56bf5856/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:26 compute-0 podman[76698]: 2025-12-01 20:31:26.795607629 +0000 UTC m=+0.024166889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:26 compute-0 podman[76698]: 2025-12-01 20:31:26.896052191 +0000 UTC m=+0.124611471 container init 9c34e54c6c410b404c03623923a1327ced044e43ff0ac70b0fdd96347132663c (image=quay.io/ceph/ceph:v20, name=fervent_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:26 compute-0 podman[76698]: 2025-12-01 20:31:26.900721235 +0000 UTC m=+0.129280485 container start 9c34e54c6c410b404c03623923a1327ced044e43ff0ac70b0fdd96347132663c (image=quay.io/ceph/ceph:v20, name=fervent_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:31:26 compute-0 podman[76698]: 2025-12-01 20:31:26.903536375 +0000 UTC m=+0.132095665 container attach 9c34e54c6c410b404c03623923a1327ced044e43ff0ac70b0fdd96347132663c (image=quay.io/ceph/ceph:v20, name=fervent_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec 01 20:31:26 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/710512491' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 01 20:31:26 compute-0 ceph-mon[75880]: mgrmap e5: compute-0.xhvuzu(active, since 3s)
Dec 01 20:31:26 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3499808362' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 01 20:31:27 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'crash'
Dec 01 20:31:27 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'dashboard'
Dec 01 20:31:27 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'devicehealth'
Dec 01 20:31:27 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'diskprediction_local'
Dec 01 20:31:28 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 01 20:31:28 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 01 20:31:28 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]:   from numpy import show_config as show_numpy_config
Dec 01 20:31:28 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'influx'
Dec 01 20:31:28 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'insights'
Dec 01 20:31:28 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'iostat'
Dec 01 20:31:28 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'k8sevents'
Dec 01 20:31:28 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'localpool'
Dec 01 20:31:28 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'mds_autoscaler'
Dec 01 20:31:28 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'mirroring'
Dec 01 20:31:29 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'nfs'
Dec 01 20:31:29 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'orchestrator'
Dec 01 20:31:29 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'osd_perf_query'
Dec 01 20:31:29 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'osd_support'
Dec 01 20:31:29 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'pg_autoscaler'
Dec 01 20:31:29 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'progress'
Dec 01 20:31:29 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'prometheus'
Dec 01 20:31:30 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'rbd_support'
Dec 01 20:31:30 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'rgw'
Dec 01 20:31:30 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'rook'
Dec 01 20:31:31 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'selftest'
Dec 01 20:31:31 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'smb'
Dec 01 20:31:31 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'snap_schedule'
Dec 01 20:31:31 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'stats'
Dec 01 20:31:31 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'status'
Dec 01 20:31:31 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'telegraf'
Dec 01 20:31:31 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'telemetry'
Dec 01 20:31:31 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'test_orchestrator'
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: mgr[py] Loading python module 'volumes'
Dec 01 20:31:32 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : Active manager daemon compute-0.xhvuzu restarted
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:31:32 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.xhvuzu
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: ms_deliver_dispatch: unhandled message 0x559f8b25e000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 01 20:31:32 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 01 20:31:32 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.xhvuzu(active, starting, since 0.0103706s)
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: mgr handle_mgr_map Activating!
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: mgr handle_mgr_map I am now activating
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 01 20:31:32 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.xhvuzu", "id": "compute-0.xhvuzu"} v 0)
Dec 01 20:31:32 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mgr metadata", "who": "compute-0.xhvuzu", "id": "compute-0.xhvuzu"} : dispatch
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 01 20:31:32 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mds metadata"} : dispatch
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e1 all = 1
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 01 20:31:32 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata"} : dispatch
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 01 20:31:32 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mon metadata"} : dispatch
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: balancer
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:32 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : Manager daemon compute-0.xhvuzu is now available
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Starting
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:31:32
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:31:32 compute-0 ceph-mgr[76174]: [balancer INFO root] No pools available
Dec 01 20:31:32 compute-0 ceph-mon[75880]: Active manager daemon compute-0.xhvuzu restarted
Dec 01 20:31:32 compute-0 ceph-mon[75880]: Activating manager daemon compute-0.xhvuzu
Dec 01 20:31:32 compute-0 ceph-mon[75880]: osdmap e2: 0 total, 0 up, 0 in
Dec 01 20:31:32 compute-0 ceph-mon[75880]: mgrmap e6: compute-0.xhvuzu(active, starting, since 0.0103706s)
Dec 01 20:31:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 01 20:31:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mgr metadata", "who": "compute-0.xhvuzu", "id": "compute-0.xhvuzu"} : dispatch
Dec 01 20:31:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mds metadata"} : dispatch
Dec 01 20:31:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata"} : dispatch
Dec 01 20:31:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mon metadata"} : dispatch
Dec 01 20:31:32 compute-0 ceph-mon[75880]: Manager daemon compute-0.xhvuzu is now available
Dec 01 20:31:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Dec 01 20:31:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Dec 01 20:31:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 01 20:31:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 01 20:31:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 01 20:31:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: cephadm
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: crash
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: devicehealth
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: iostat
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: nfs
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: orchestrator
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [devicehealth INFO root] Starting
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: pg_autoscaler
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: progress
Dec 01 20:31:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 01 20:31:33 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [progress INFO root] Loading...
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [progress INFO root] No stored events to load
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [progress INFO root] Loaded [] historic events
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [progress INFO root] Loaded OSDMap, ready.
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:31:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 01 20:31:33 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] recovery thread starting
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] starting setup
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: rbd_support
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: status
Dec 01 20:31:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/mirror_snapshot_schedule"} v 0)
Dec 01 20:31:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/mirror_snapshot_schedule"} : dispatch
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: telemetry
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] PerfHandler: starting
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TaskHandler: starting
Dec 01 20:31:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/trash_purge_schedule"} v 0)
Dec 01 20:31:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/trash_purge_schedule"} : dispatch
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] setup complete
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: mgr load Constructed class from module: volumes
Dec 01 20:31:33 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.xhvuzu(active, since 1.02047s)
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 01 20:31:33 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 01 20:31:33 compute-0 fervent_ganguly[76714]: {
Dec 01 20:31:33 compute-0 fervent_ganguly[76714]:     "mgrmap_epoch": 7,
Dec 01 20:31:33 compute-0 fervent_ganguly[76714]:     "initialized": true
Dec 01 20:31:33 compute-0 fervent_ganguly[76714]: }
Dec 01 20:31:33 compute-0 systemd[1]: libpod-9c34e54c6c410b404c03623923a1327ced044e43ff0ac70b0fdd96347132663c.scope: Deactivated successfully.
Dec 01 20:31:33 compute-0 podman[76698]: 2025-12-01 20:31:33.451251752 +0000 UTC m=+6.679811012 container died 9c34e54c6c410b404c03623923a1327ced044e43ff0ac70b0fdd96347132663c (image=quay.io/ceph/ceph:v20, name=fervent_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c62b4dea1ec0b73a2e2e43d2c72fb99178fb124aaa199797870b97d56bf5856-merged.mount: Deactivated successfully.
Dec 01 20:31:33 compute-0 podman[76698]: 2025-12-01 20:31:33.489247172 +0000 UTC m=+6.717806432 container remove 9c34e54c6c410b404c03623923a1327ced044e43ff0ac70b0fdd96347132663c (image=quay.io/ceph/ceph:v20, name=fervent_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:33 compute-0 systemd[1]: libpod-conmon-9c34e54c6c410b404c03623923a1327ced044e43ff0ac70b0fdd96347132663c.scope: Deactivated successfully.
Dec 01 20:31:33 compute-0 podman[76861]: 2025-12-01 20:31:33.567879231 +0000 UTC m=+0.058830619 container create 4b6fc63d9c0069b8cfffe804d96e3b22243435acf61d2667a642031a3d5c2fa5 (image=quay.io/ceph/ceph:v20, name=youthful_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 01 20:31:33 compute-0 systemd[1]: Started libpod-conmon-4b6fc63d9c0069b8cfffe804d96e3b22243435acf61d2667a642031a3d5c2fa5.scope.
Dec 01 20:31:33 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda7525338e2237e0db34a091ec6eaa190a3b158ba59efff7ad9cea52968f7ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda7525338e2237e0db34a091ec6eaa190a3b158ba59efff7ad9cea52968f7ee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda7525338e2237e0db34a091ec6eaa190a3b158ba59efff7ad9cea52968f7ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:33 compute-0 podman[76861]: 2025-12-01 20:31:33.544871539 +0000 UTC m=+0.035822947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:33 compute-0 podman[76861]: 2025-12-01 20:31:33.63521752 +0000 UTC m=+0.126168928 container init 4b6fc63d9c0069b8cfffe804d96e3b22243435acf61d2667a642031a3d5c2fa5 (image=quay.io/ceph/ceph:v20, name=youthful_cerf, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 01 20:31:33 compute-0 podman[76861]: 2025-12-01 20:31:33.639564776 +0000 UTC m=+0.130516164 container start 4b6fc63d9c0069b8cfffe804d96e3b22243435acf61d2667a642031a3d5c2fa5 (image=quay.io/ceph/ceph:v20, name=youthful_cerf, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:33 compute-0 podman[76861]: 2025-12-01 20:31:33.643428205 +0000 UTC m=+0.134379623 container attach 4b6fc63d9c0069b8cfffe804d96e3b22243435acf61d2667a642031a3d5c2fa5 (image=quay.io/ceph/ceph:v20, name=youthful_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 01 20:31:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Dec 01 20:31:34 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/739952808' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 01 20:31:34 compute-0 ceph-mgr[76174]: [cephadm INFO cherrypy.error] [01/Dec/2025:20:31:34] ENGINE Bus STARTING
Dec 01 20:31:34 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : [01/Dec/2025:20:31:34] ENGINE Bus STARTING
Dec 01 20:31:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:34 compute-0 ceph-mon[75880]: Found migration_current of "None". Setting to last migration.
Dec 01 20:31:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:31:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:31:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/mirror_snapshot_schedule"} : dispatch
Dec 01 20:31:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xhvuzu/trash_purge_schedule"} : dispatch
Dec 01 20:31:34 compute-0 ceph-mon[75880]: mgrmap e7: compute-0.xhvuzu(active, since 1.02047s)
Dec 01 20:31:34 compute-0 ceph-mon[75880]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 01 20:31:34 compute-0 ceph-mon[75880]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 01 20:31:34 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/739952808' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 01 20:31:34 compute-0 ceph-mgr[76174]: [cephadm INFO cherrypy.error] [01/Dec/2025:20:31:34] ENGINE Serving on https://192.168.122.100:7150
Dec 01 20:31:34 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : [01/Dec/2025:20:31:34] ENGINE Serving on https://192.168.122.100:7150
Dec 01 20:31:34 compute-0 ceph-mgr[76174]: [cephadm INFO cherrypy.error] [01/Dec/2025:20:31:34] ENGINE Client ('192.168.122.100', 46296) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 01 20:31:34 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : [01/Dec/2025:20:31:34] ENGINE Client ('192.168.122.100', 46296) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 01 20:31:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019909983 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:31:34 compute-0 ceph-mgr[76174]: [cephadm INFO cherrypy.error] [01/Dec/2025:20:31:34] ENGINE Serving on http://192.168.122.100:8765
Dec 01 20:31:34 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : [01/Dec/2025:20:31:34] ENGINE Serving on http://192.168.122.100:8765
Dec 01 20:31:34 compute-0 ceph-mgr[76174]: [cephadm INFO cherrypy.error] [01/Dec/2025:20:31:34] ENGINE Bus STARTED
Dec 01 20:31:34 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : [01/Dec/2025:20:31:34] ENGINE Bus STARTED
Dec 01 20:31:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 01 20:31:34 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:31:34 compute-0 ceph-mgr[76174]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 01 20:31:34 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/739952808' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 01 20:31:34 compute-0 youthful_cerf[76878]: module 'orchestrator' is already enabled (always-on)
Dec 01 20:31:34 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.xhvuzu(active, since 2s)
Dec 01 20:31:34 compute-0 systemd[1]: libpod-4b6fc63d9c0069b8cfffe804d96e3b22243435acf61d2667a642031a3d5c2fa5.scope: Deactivated successfully.
Dec 01 20:31:34 compute-0 podman[76861]: 2025-12-01 20:31:34.449355057 +0000 UTC m=+0.940306435 container died 4b6fc63d9c0069b8cfffe804d96e3b22243435acf61d2667a642031a3d5c2fa5 (image=quay.io/ceph/ceph:v20, name=youthful_cerf, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bda7525338e2237e0db34a091ec6eaa190a3b158ba59efff7ad9cea52968f7ee-merged.mount: Deactivated successfully.
Dec 01 20:31:34 compute-0 podman[76861]: 2025-12-01 20:31:34.485364354 +0000 UTC m=+0.976315742 container remove 4b6fc63d9c0069b8cfffe804d96e3b22243435acf61d2667a642031a3d5c2fa5 (image=quay.io/ceph/ceph:v20, name=youthful_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:31:34 compute-0 systemd[1]: libpod-conmon-4b6fc63d9c0069b8cfffe804d96e3b22243435acf61d2667a642031a3d5c2fa5.scope: Deactivated successfully.
Dec 01 20:31:34 compute-0 podman[76940]: 2025-12-01 20:31:34.544380823 +0000 UTC m=+0.039029209 container create 9dc27f28e452dc7c397a4238c8c68c8b0e03a0d12b30a11764537690ca1818c2 (image=quay.io/ceph/ceph:v20, name=admiring_euclid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 01 20:31:34 compute-0 systemd[1]: Started libpod-conmon-9dc27f28e452dc7c397a4238c8c68c8b0e03a0d12b30a11764537690ca1818c2.scope.
Dec 01 20:31:34 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a9641595f04b3e065bbd416b9f84f1752f984fcd9b5bb180a2627a84085254/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a9641595f04b3e065bbd416b9f84f1752f984fcd9b5bb180a2627a84085254/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a9641595f04b3e065bbd416b9f84f1752f984fcd9b5bb180a2627a84085254/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:34 compute-0 podman[76940]: 2025-12-01 20:31:34.620095927 +0000 UTC m=+0.114744353 container init 9dc27f28e452dc7c397a4238c8c68c8b0e03a0d12b30a11764537690ca1818c2 (image=quay.io/ceph/ceph:v20, name=admiring_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 01 20:31:34 compute-0 podman[76940]: 2025-12-01 20:31:34.527298987 +0000 UTC m=+0.021947403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:34 compute-0 podman[76940]: 2025-12-01 20:31:34.625951558 +0000 UTC m=+0.120599954 container start 9dc27f28e452dc7c397a4238c8c68c8b0e03a0d12b30a11764537690ca1818c2 (image=quay.io/ceph/ceph:v20, name=admiring_euclid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:34 compute-0 podman[76940]: 2025-12-01 20:31:34.629974585 +0000 UTC m=+0.124623011 container attach 9dc27f28e452dc7c397a4238c8c68c8b0e03a0d12b30a11764537690ca1818c2 (image=quay.io/ceph/ceph:v20, name=admiring_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 01 20:31:35 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 01 20:31:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 01 20:31:35 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:31:35 compute-0 systemd[1]: libpod-9dc27f28e452dc7c397a4238c8c68c8b0e03a0d12b30a11764537690ca1818c2.scope: Deactivated successfully.
Dec 01 20:31:35 compute-0 podman[76940]: 2025-12-01 20:31:35.109888115 +0000 UTC m=+0.604536511 container died 9dc27f28e452dc7c397a4238c8c68c8b0e03a0d12b30a11764537690ca1818c2 (image=quay.io/ceph/ceph:v20, name=admiring_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5a9641595f04b3e065bbd416b9f84f1752f984fcd9b5bb180a2627a84085254-merged.mount: Deactivated successfully.
Dec 01 20:31:35 compute-0 podman[76940]: 2025-12-01 20:31:35.146475665 +0000 UTC m=+0.641124061 container remove 9dc27f28e452dc7c397a4238c8c68c8b0e03a0d12b30a11764537690ca1818c2 (image=quay.io/ceph/ceph:v20, name=admiring_euclid, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:35 compute-0 systemd[1]: libpod-conmon-9dc27f28e452dc7c397a4238c8c68c8b0e03a0d12b30a11764537690ca1818c2.scope: Deactivated successfully.
Dec 01 20:31:35 compute-0 ceph-mon[75880]: [01/Dec/2025:20:31:34] ENGINE Bus STARTING
Dec 01 20:31:35 compute-0 ceph-mon[75880]: [01/Dec/2025:20:31:34] ENGINE Serving on https://192.168.122.100:7150
Dec 01 20:31:35 compute-0 ceph-mon[75880]: [01/Dec/2025:20:31:34] ENGINE Client ('192.168.122.100', 46296) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 01 20:31:35 compute-0 ceph-mon[75880]: [01/Dec/2025:20:31:34] ENGINE Serving on http://192.168.122.100:8765
Dec 01 20:31:35 compute-0 ceph-mon[75880]: [01/Dec/2025:20:31:34] ENGINE Bus STARTED
Dec 01 20:31:35 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:31:35 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/739952808' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 01 20:31:35 compute-0 ceph-mon[75880]: mgrmap e8: compute-0.xhvuzu(active, since 2s)
Dec 01 20:31:35 compute-0 ceph-mon[75880]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:35 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:35 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:31:35 compute-0 podman[76995]: 2025-12-01 20:31:35.210813985 +0000 UTC m=+0.042868787 container create a6fb84ffa39ad077f31c787cd3d5849e3e9f8864c4ff3bc747d9c8d74d2f949d (image=quay.io/ceph/ceph:v20, name=objective_kare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:35 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:35 compute-0 systemd[1]: Started libpod-conmon-a6fb84ffa39ad077f31c787cd3d5849e3e9f8864c4ff3bc747d9c8d74d2f949d.scope.
Dec 01 20:31:35 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79747d4196973462130d833361840e85e92071a6153818b52f2186e6e4347777/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79747d4196973462130d833361840e85e92071a6153818b52f2186e6e4347777/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79747d4196973462130d833361840e85e92071a6153818b52f2186e6e4347777/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:35 compute-0 podman[76995]: 2025-12-01 20:31:35.191271369 +0000 UTC m=+0.023326191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:35 compute-0 podman[76995]: 2025-12-01 20:31:35.300748553 +0000 UTC m=+0.132803365 container init a6fb84ffa39ad077f31c787cd3d5849e3e9f8864c4ff3bc747d9c8d74d2f949d (image=quay.io/ceph/ceph:v20, name=objective_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 01 20:31:35 compute-0 podman[76995]: 2025-12-01 20:31:35.312828755 +0000 UTC m=+0.144883557 container start a6fb84ffa39ad077f31c787cd3d5849e3e9f8864c4ff3bc747d9c8d74d2f949d (image=quay.io/ceph/ceph:v20, name=objective_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 01 20:31:35 compute-0 podman[76995]: 2025-12-01 20:31:35.315523298 +0000 UTC m=+0.147578120 container attach a6fb84ffa39ad077f31c787cd3d5849e3e9f8864c4ff3bc747d9c8d74d2f949d (image=quay.io/ceph/ceph:v20, name=objective_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:31:35 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 01 20:31:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:35 compute-0 ceph-mgr[76174]: [cephadm INFO root] Set ssh ssh_user
Dec 01 20:31:35 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 01 20:31:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 01 20:31:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:35 compute-0 ceph-mgr[76174]: [cephadm INFO root] Set ssh ssh_config
Dec 01 20:31:35 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 01 20:31:35 compute-0 ceph-mgr[76174]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 01 20:31:35 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 01 20:31:35 compute-0 objective_kare[77011]: ssh user set to ceph-admin. sudo will be used
Dec 01 20:31:35 compute-0 systemd[1]: libpod-a6fb84ffa39ad077f31c787cd3d5849e3e9f8864c4ff3bc747d9c8d74d2f949d.scope: Deactivated successfully.
Dec 01 20:31:35 compute-0 podman[76995]: 2025-12-01 20:31:35.747678356 +0000 UTC m=+0.579733188 container died a6fb84ffa39ad077f31c787cd3d5849e3e9f8864c4ff3bc747d9c8d74d2f949d (image=quay.io/ceph/ceph:v20, name=objective_kare, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-79747d4196973462130d833361840e85e92071a6153818b52f2186e6e4347777-merged.mount: Deactivated successfully.
Dec 01 20:31:35 compute-0 podman[76995]: 2025-12-01 20:31:35.784987897 +0000 UTC m=+0.617042699 container remove a6fb84ffa39ad077f31c787cd3d5849e3e9f8864c4ff3bc747d9c8d74d2f949d (image=quay.io/ceph/ceph:v20, name=objective_kare, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 01 20:31:35 compute-0 systemd[1]: libpod-conmon-a6fb84ffa39ad077f31c787cd3d5849e3e9f8864c4ff3bc747d9c8d74d2f949d.scope: Deactivated successfully.
Dec 01 20:31:35 compute-0 podman[77050]: 2025-12-01 20:31:35.845298818 +0000 UTC m=+0.039715988 container create cde88002e71732bd07a67eda11bc3e57c670402b9afece0c86416a9aea76ecbe (image=quay.io/ceph/ceph:v20, name=sad_pascal, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:31:35 compute-0 systemd[1]: Started libpod-conmon-cde88002e71732bd07a67eda11bc3e57c670402b9afece0c86416a9aea76ecbe.scope.
Dec 01 20:31:35 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/409b09c764a67a86bd8088b297e3b95fd9016720be204986c8cb7006c148f8ee/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/409b09c764a67a86bd8088b297e3b95fd9016720be204986c8cb7006c148f8ee/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/409b09c764a67a86bd8088b297e3b95fd9016720be204986c8cb7006c148f8ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/409b09c764a67a86bd8088b297e3b95fd9016720be204986c8cb7006c148f8ee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/409b09c764a67a86bd8088b297e3b95fd9016720be204986c8cb7006c148f8ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:35 compute-0 podman[77050]: 2025-12-01 20:31:35.825323438 +0000 UTC m=+0.019740628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:35 compute-0 podman[77050]: 2025-12-01 20:31:35.926144262 +0000 UTC m=+0.120561462 container init cde88002e71732bd07a67eda11bc3e57c670402b9afece0c86416a9aea76ecbe (image=quay.io/ceph/ceph:v20, name=sad_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 01 20:31:35 compute-0 podman[77050]: 2025-12-01 20:31:35.933395723 +0000 UTC m=+0.127812933 container start cde88002e71732bd07a67eda11bc3e57c670402b9afece0c86416a9aea76ecbe (image=quay.io/ceph/ceph:v20, name=sad_pascal, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:35 compute-0 podman[77050]: 2025-12-01 20:31:35.937373127 +0000 UTC m=+0.131790307 container attach cde88002e71732bd07a67eda11bc3e57c670402b9afece0c86416a9aea76ecbe (image=quay.io/ceph/ceph:v20, name=sad_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:31:36 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 01 20:31:36 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:36 compute-0 ceph-mgr[76174]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 01 20:31:36 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 01 20:31:36 compute-0 ceph-mgr[76174]: [cephadm INFO root] Set ssh private key
Dec 01 20:31:36 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 01 20:31:36 compute-0 systemd[1]: libpod-cde88002e71732bd07a67eda11bc3e57c670402b9afece0c86416a9aea76ecbe.scope: Deactivated successfully.
Dec 01 20:31:36 compute-0 podman[77050]: 2025-12-01 20:31:36.364641269 +0000 UTC m=+0.559058439 container died cde88002e71732bd07a67eda11bc3e57c670402b9afece0c86416a9aea76ecbe (image=quay.io/ceph/ceph:v20, name=sad_pascal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 01 20:31:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-409b09c764a67a86bd8088b297e3b95fd9016720be204986c8cb7006c148f8ee-merged.mount: Deactivated successfully.
Dec 01 20:31:36 compute-0 podman[77050]: 2025-12-01 20:31:36.410926527 +0000 UTC m=+0.605343687 container remove cde88002e71732bd07a67eda11bc3e57c670402b9afece0c86416a9aea76ecbe (image=quay.io/ceph/ceph:v20, name=sad_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:36 compute-0 ceph-mgr[76174]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 01 20:31:36 compute-0 systemd[1]: libpod-conmon-cde88002e71732bd07a67eda11bc3e57c670402b9afece0c86416a9aea76ecbe.scope: Deactivated successfully.
Dec 01 20:31:36 compute-0 podman[77106]: 2025-12-01 20:31:36.467484097 +0000 UTC m=+0.039060451 container create e88db15b80e8514ebfdca4920020774ec77f1f9966b6f3422807d636ddbd7a19 (image=quay.io/ceph/ceph:v20, name=frosty_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 01 20:31:36 compute-0 systemd[1]: Started libpod-conmon-e88db15b80e8514ebfdca4920020774ec77f1f9966b6f3422807d636ddbd7a19.scope.
Dec 01 20:31:36 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da34d58dc2d85c5b8ca6698104e5b46a39b5ee2d169f2b00a66ef08327897b2/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da34d58dc2d85c5b8ca6698104e5b46a39b5ee2d169f2b00a66ef08327897b2/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da34d58dc2d85c5b8ca6698104e5b46a39b5ee2d169f2b00a66ef08327897b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:36 compute-0 podman[77106]: 2025-12-01 20:31:36.450479885 +0000 UTC m=+0.022056259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da34d58dc2d85c5b8ca6698104e5b46a39b5ee2d169f2b00a66ef08327897b2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da34d58dc2d85c5b8ca6698104e5b46a39b5ee2d169f2b00a66ef08327897b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:36 compute-0 podman[77106]: 2025-12-01 20:31:36.557092996 +0000 UTC m=+0.128669350 container init e88db15b80e8514ebfdca4920020774ec77f1f9966b6f3422807d636ddbd7a19 (image=quay.io/ceph/ceph:v20, name=frosty_meitner, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:36 compute-0 podman[77106]: 2025-12-01 20:31:36.569993316 +0000 UTC m=+0.141569670 container start e88db15b80e8514ebfdca4920020774ec77f1f9966b6f3422807d636ddbd7a19 (image=quay.io/ceph/ceph:v20, name=frosty_meitner, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:31:36 compute-0 podman[77106]: 2025-12-01 20:31:36.574318581 +0000 UTC m=+0.145894965 container attach e88db15b80e8514ebfdca4920020774ec77f1f9966b6f3422807d636ddbd7a19 (image=quay.io/ceph/ceph:v20, name=frosty_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:36 compute-0 ceph-mon[75880]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:36 compute-0 ceph-mon[75880]: Set ssh ssh_user
Dec 01 20:31:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:36 compute-0 ceph-mon[75880]: Set ssh ssh_config
Dec 01 20:31:36 compute-0 ceph-mon[75880]: ssh user set to ceph-admin. sudo will be used
Dec 01 20:31:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:37 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 01 20:31:37 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:37 compute-0 ceph-mgr[76174]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 01 20:31:37 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 01 20:31:37 compute-0 systemd[1]: libpod-e88db15b80e8514ebfdca4920020774ec77f1f9966b6f3422807d636ddbd7a19.scope: Deactivated successfully.
Dec 01 20:31:37 compute-0 conmon[77123]: conmon e88db15b80e8514ebfdc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e88db15b80e8514ebfdca4920020774ec77f1f9966b6f3422807d636ddbd7a19.scope/container/memory.events
Dec 01 20:31:37 compute-0 podman[77106]: 2025-12-01 20:31:37.066735808 +0000 UTC m=+0.638312192 container died e88db15b80e8514ebfdca4920020774ec77f1f9966b6f3422807d636ddbd7a19 (image=quay.io/ceph/ceph:v20, name=frosty_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:31:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1da34d58dc2d85c5b8ca6698104e5b46a39b5ee2d169f2b00a66ef08327897b2-merged.mount: Deactivated successfully.
Dec 01 20:31:37 compute-0 podman[77106]: 2025-12-01 20:31:37.108904854 +0000 UTC m=+0.680481228 container remove e88db15b80e8514ebfdca4920020774ec77f1f9966b6f3422807d636ddbd7a19 (image=quay.io/ceph/ceph:v20, name=frosty_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:37 compute-0 systemd[1]: libpod-conmon-e88db15b80e8514ebfdca4920020774ec77f1f9966b6f3422807d636ddbd7a19.scope: Deactivated successfully.
Dec 01 20:31:37 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:37 compute-0 podman[77162]: 2025-12-01 20:31:37.163207126 +0000 UTC m=+0.029771406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:37 compute-0 podman[77162]: 2025-12-01 20:31:37.259260929 +0000 UTC m=+0.125825119 container create 2a5f354c2bb4d439544f296663fa0f8f2d963b9751cb997d1fd806deaa32f118 (image=quay.io/ceph/ceph:v20, name=admiring_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:37 compute-0 systemd[1]: Started libpod-conmon-2a5f354c2bb4d439544f296663fa0f8f2d963b9751cb997d1fd806deaa32f118.scope.
Dec 01 20:31:37 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e4a3e9bf4be84054a3e733acf1a86aef91248bc65d277071ac44c38c01a008/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e4a3e9bf4be84054a3e733acf1a86aef91248bc65d277071ac44c38c01a008/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e4a3e9bf4be84054a3e733acf1a86aef91248bc65d277071ac44c38c01a008/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:37 compute-0 podman[77162]: 2025-12-01 20:31:37.328058852 +0000 UTC m=+0.194623042 container init 2a5f354c2bb4d439544f296663fa0f8f2d963b9751cb997d1fd806deaa32f118 (image=quay.io/ceph/ceph:v20, name=admiring_chatterjee, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec 01 20:31:37 compute-0 podman[77162]: 2025-12-01 20:31:37.338742226 +0000 UTC m=+0.205306426 container start 2a5f354c2bb4d439544f296663fa0f8f2d963b9751cb997d1fd806deaa32f118 (image=quay.io/ceph/ceph:v20, name=admiring_chatterjee, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 01 20:31:37 compute-0 podman[77162]: 2025-12-01 20:31:37.341932137 +0000 UTC m=+0.208496417 container attach 2a5f354c2bb4d439544f296663fa0f8f2d963b9751cb997d1fd806deaa32f118 (image=quay.io/ceph/ceph:v20, name=admiring_chatterjee, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:37 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:37 compute-0 admiring_chatterjee[77178]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnWty/gj06tDQ0ilH1oRMv3GyoY9gMjTLGUnQRJgmGlXlVYoCicMc1+PiUgvRk1rbtz9F9IFvqhgeSpbZDA4Wn54zVnltR8TIvu/SWjg0e0+7tIip2hqWm2M47pUavORWzZ8DHJ29+KXOUjrOYUzxr/wRbsSOCBAXcYjeNnBSQd4cpSsAwR8Tx9U6UBW5PCE8OkWoaLK3T2qc9Sq8QvKdCrDl/svozz0NIeYLpNziROHLr0Wy8bT9bdBB1LkXSOva1Tv5WjPGjnBLMc/vjr599YvcgFdfhzHXhjXqYrI/KIHw1A1ImA32Y/d6By1T+db59ecrXE1dkJsGUQVOuhr07iP2cPp5ksZQLQZosDWFZtTtXFvct2u4nz98X+F0WdKKCbAxY0JJzRnn0MsiFGQyJAFob+zSM535r4qqbLW48/sCQvwRAcB1QXRsKn0MmAXIU/iDrhiAKgW+/pdDSvXjkbhFMrqNGooypxX2Epfqbey8sYGwxG2mxzpmlJsTSBYs= zuul@controller
Dec 01 20:31:37 compute-0 systemd[1]: libpod-2a5f354c2bb4d439544f296663fa0f8f2d963b9751cb997d1fd806deaa32f118.scope: Deactivated successfully.
Dec 01 20:31:37 compute-0 podman[77162]: 2025-12-01 20:31:37.74696197 +0000 UTC m=+0.613526190 container died 2a5f354c2bb4d439544f296663fa0f8f2d963b9751cb997d1fd806deaa32f118 (image=quay.io/ceph/ceph:v20, name=admiring_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2e4a3e9bf4be84054a3e733acf1a86aef91248bc65d277071ac44c38c01a008-merged.mount: Deactivated successfully.
Dec 01 20:31:37 compute-0 podman[77162]: 2025-12-01 20:31:37.815964653 +0000 UTC m=+0.682528883 container remove 2a5f354c2bb4d439544f296663fa0f8f2d963b9751cb997d1fd806deaa32f118 (image=quay.io/ceph/ceph:v20, name=admiring_chatterjee, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 01 20:31:37 compute-0 systemd[1]: libpod-conmon-2a5f354c2bb4d439544f296663fa0f8f2d963b9751cb997d1fd806deaa32f118.scope: Deactivated successfully.
Dec 01 20:31:37 compute-0 podman[77218]: 2025-12-01 20:31:37.88201202 +0000 UTC m=+0.043189975 container create 3e7571d3bd36690da479e582e09f1e3a4fe520f793344ef46db8b85f3105636c (image=quay.io/ceph/ceph:v20, name=heuristic_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:37 compute-0 systemd[1]: Started libpod-conmon-3e7571d3bd36690da479e582e09f1e3a4fe520f793344ef46db8b85f3105636c.scope.
Dec 01 20:31:37 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0addaca8811c2ffa7303af6f8a46984a46186c95fd06f529d9670f2b46d69f39/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0addaca8811c2ffa7303af6f8a46984a46186c95fd06f529d9670f2b46d69f39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0addaca8811c2ffa7303af6f8a46984a46186c95fd06f529d9670f2b46d69f39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:37 compute-0 podman[77218]: 2025-12-01 20:31:37.863150143 +0000 UTC m=+0.024328098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:37 compute-0 podman[77218]: 2025-12-01 20:31:37.95835994 +0000 UTC m=+0.119537915 container init 3e7571d3bd36690da479e582e09f1e3a4fe520f793344ef46db8b85f3105636c (image=quay.io/ceph/ceph:v20, name=heuristic_payne, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:37 compute-0 podman[77218]: 2025-12-01 20:31:37.963487819 +0000 UTC m=+0.124665774 container start 3e7571d3bd36690da479e582e09f1e3a4fe520f793344ef46db8b85f3105636c (image=quay.io/ceph/ceph:v20, name=heuristic_payne, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:37 compute-0 podman[77218]: 2025-12-01 20:31:37.966773034 +0000 UTC m=+0.127950989 container attach 3e7571d3bd36690da479e582e09f1e3a4fe520f793344ef46db8b85f3105636c (image=quay.io/ceph/ceph:v20, name=heuristic_payne, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:31:38 compute-0 ceph-mon[75880]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:38 compute-0 ceph-mon[75880]: Set ssh ssh_identity_key
Dec 01 20:31:38 compute-0 ceph-mon[75880]: Set ssh private key
Dec 01 20:31:38 compute-0 ceph-mon[75880]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:38 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:38 compute-0 ceph-mon[75880]: Set ssh ssh_identity_pub
Dec 01 20:31:38 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:38 compute-0 ceph-mgr[76174]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 01 20:31:38 compute-0 sshd-session[77260]: Accepted publickey for ceph-admin from 192.168.122.100 port 43348 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:38 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 01 20:31:38 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 01 20:31:38 compute-0 systemd-logind[796]: New session 22 of user ceph-admin.
Dec 01 20:31:38 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 01 20:31:38 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 01 20:31:38 compute-0 systemd[77264]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:38 compute-0 systemd[77264]: Queued start job for default target Main User Target.
Dec 01 20:31:38 compute-0 systemd[77264]: Created slice User Application Slice.
Dec 01 20:31:38 compute-0 systemd[77264]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 01 20:31:38 compute-0 systemd[77264]: Started Daily Cleanup of User's Temporary Directories.
Dec 01 20:31:38 compute-0 systemd[77264]: Reached target Paths.
Dec 01 20:31:38 compute-0 systemd[77264]: Reached target Timers.
Dec 01 20:31:38 compute-0 systemd[77264]: Starting D-Bus User Message Bus Socket...
Dec 01 20:31:38 compute-0 systemd[77264]: Starting Create User's Volatile Files and Directories...
Dec 01 20:31:38 compute-0 systemd[77264]: Listening on D-Bus User Message Bus Socket.
Dec 01 20:31:38 compute-0 systemd[77264]: Finished Create User's Volatile Files and Directories.
Dec 01 20:31:38 compute-0 systemd[77264]: Reached target Sockets.
Dec 01 20:31:38 compute-0 systemd[77264]: Reached target Basic System.
Dec 01 20:31:38 compute-0 systemd[77264]: Reached target Main User Target.
Dec 01 20:31:38 compute-0 systemd[77264]: Startup finished in 116ms.
Dec 01 20:31:38 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 01 20:31:38 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Dec 01 20:31:38 compute-0 sshd-session[77260]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:38 compute-0 sshd-session[77277]: Accepted publickey for ceph-admin from 192.168.122.100 port 43352 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:38 compute-0 systemd-logind[796]: New session 24 of user ceph-admin.
Dec 01 20:31:38 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Dec 01 20:31:38 compute-0 sshd-session[77277]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:38 compute-0 sudo[77284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:38 compute-0 sudo[77284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:38 compute-0 sudo[77284]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:39 compute-0 ceph-mon[75880]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:39 compute-0 sshd-session[77309]: Accepted publickey for ceph-admin from 192.168.122.100 port 43368 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:39 compute-0 systemd-logind[796]: New session 25 of user ceph-admin.
Dec 01 20:31:39 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 01 20:31:39 compute-0 sshd-session[77309]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:39 compute-0 sudo[77313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 01 20:31:39 compute-0 sudo[77313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:39 compute-0 sudo[77313]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:39 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052733 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:31:39 compute-0 sshd-session[77338]: Accepted publickey for ceph-admin from 192.168.122.100 port 43376 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:39 compute-0 systemd-logind[796]: New session 26 of user ceph-admin.
Dec 01 20:31:39 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 01 20:31:39 compute-0 sshd-session[77338]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:39 compute-0 sudo[77342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Dec 01 20:31:39 compute-0 sudo[77342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:39 compute-0 sudo[77342]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:39 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 01 20:31:39 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 01 20:31:39 compute-0 sshd-session[77367]: Accepted publickey for ceph-admin from 192.168.122.100 port 43386 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:39 compute-0 systemd-logind[796]: New session 27 of user ceph-admin.
Dec 01 20:31:39 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 01 20:31:39 compute-0 sshd-session[77367]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:39 compute-0 sudo[77371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:39 compute-0 sudo[77371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:39 compute-0 sudo[77371]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:40 compute-0 sshd-session[77396]: Accepted publickey for ceph-admin from 192.168.122.100 port 43392 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:40 compute-0 systemd-logind[796]: New session 28 of user ceph-admin.
Dec 01 20:31:40 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 01 20:31:40 compute-0 sshd-session[77396]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:40 compute-0 ceph-mon[75880]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:40 compute-0 sudo[77400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:40 compute-0 sudo[77400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:40 compute-0 sudo[77400]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:40 compute-0 sshd-session[77425]: Accepted publickey for ceph-admin from 192.168.122.100 port 43398 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:40 compute-0 systemd-logind[796]: New session 29 of user ceph-admin.
Dec 01 20:31:40 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 01 20:31:40 compute-0 sshd-session[77425]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:40 compute-0 ceph-mgr[76174]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 01 20:31:40 compute-0 sudo[77429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Dec 01 20:31:40 compute-0 sudo[77429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:40 compute-0 sudo[77429]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:40 compute-0 sshd-session[77454]: Accepted publickey for ceph-admin from 192.168.122.100 port 43412 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:40 compute-0 systemd-logind[796]: New session 30 of user ceph-admin.
Dec 01 20:31:40 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 01 20:31:40 compute-0 sshd-session[77454]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:40 compute-0 sudo[77458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:40 compute-0 sudo[77458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:40 compute-0 sudo[77458]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:40 compute-0 sshd-session[77483]: Accepted publickey for ceph-admin from 192.168.122.100 port 43422 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:40 compute-0 systemd-logind[796]: New session 31 of user ceph-admin.
Dec 01 20:31:40 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 01 20:31:40 compute-0 sshd-session[77483]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:41 compute-0 ceph-mon[75880]: Deploying cephadm binary to compute-0
Dec 01 20:31:41 compute-0 sudo[77487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Dec 01 20:31:41 compute-0 sudo[77487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:41 compute-0 sudo[77487]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:41 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:41 compute-0 sshd-session[77512]: Accepted publickey for ceph-admin from 192.168.122.100 port 43424 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:41 compute-0 systemd-logind[796]: New session 32 of user ceph-admin.
Dec 01 20:31:41 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 01 20:31:41 compute-0 sshd-session[77512]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:42 compute-0 ceph-mgr[76174]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 01 20:31:42 compute-0 sshd-session[77539]: Accepted publickey for ceph-admin from 192.168.122.100 port 43430 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:42 compute-0 systemd-logind[796]: New session 33 of user ceph-admin.
Dec 01 20:31:42 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 01 20:31:42 compute-0 sshd-session[77539]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:42 compute-0 sudo[77543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Dec 01 20:31:42 compute-0 sudo[77543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:42 compute-0 sudo[77543]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:43 compute-0 sshd-session[77568]: Accepted publickey for ceph-admin from 192.168.122.100 port 43446 ssh2: RSA SHA256:Kk7kPiu5i2kvqn4Pe8X1V4o9QTF466QbpLwQnPdU1iY
Dec 01 20:31:43 compute-0 systemd-logind[796]: New session 34 of user ceph-admin.
Dec 01 20:31:43 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Dec 01 20:31:43 compute-0 sshd-session[77568]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 01 20:31:43 compute-0 sudo[77572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 01 20:31:43 compute-0 sudo[77572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:43 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:43 compute-0 sudo[77572]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 01 20:31:43 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:43 compute-0 ceph-mgr[76174]: [cephadm INFO root] Added host compute-0
Dec 01 20:31:43 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 01 20:31:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 01 20:31:43 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:31:43 compute-0 heuristic_payne[77234]: Added host 'compute-0' with addr '192.168.122.100'
Dec 01 20:31:43 compute-0 systemd[1]: libpod-3e7571d3bd36690da479e582e09f1e3a4fe520f793344ef46db8b85f3105636c.scope: Deactivated successfully.
Dec 01 20:31:43 compute-0 podman[77218]: 2025-12-01 20:31:43.526002712 +0000 UTC m=+5.687180657 container died 3e7571d3bd36690da479e582e09f1e3a4fe520f793344ef46db8b85f3105636c (image=quay.io/ceph/ceph:v20, name=heuristic_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0addaca8811c2ffa7303af6f8a46984a46186c95fd06f529d9670f2b46d69f39-merged.mount: Deactivated successfully.
Dec 01 20:31:43 compute-0 sudo[77616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:43 compute-0 sudo[77616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:43 compute-0 sudo[77616]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:43 compute-0 podman[77218]: 2025-12-01 20:31:43.58129266 +0000 UTC m=+5.742470615 container remove 3e7571d3bd36690da479e582e09f1e3a4fe520f793344ef46db8b85f3105636c (image=quay.io/ceph/ceph:v20, name=heuristic_payne, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 01 20:31:43 compute-0 systemd[1]: libpod-conmon-3e7571d3bd36690da479e582e09f1e3a4fe520f793344ef46db8b85f3105636c.scope: Deactivated successfully.
Dec 01 20:31:43 compute-0 sudo[77654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Dec 01 20:31:43 compute-0 sudo[77654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:43 compute-0 podman[77658]: 2025-12-01 20:31:43.645488962 +0000 UTC m=+0.041502079 container create 9ca390447ab5e5d203c34a2489b3cced7c71e89be4ec3438296bedb3120edb6e (image=quay.io/ceph/ceph:v20, name=modest_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:43 compute-0 systemd[1]: Started libpod-conmon-9ca390447ab5e5d203c34a2489b3cced7c71e89be4ec3438296bedb3120edb6e.scope.
Dec 01 20:31:43 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e324beb355acef498c177e0a78ea26a0fb039bf6017f37352acb9a794c338135/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e324beb355acef498c177e0a78ea26a0fb039bf6017f37352acb9a794c338135/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e324beb355acef498c177e0a78ea26a0fb039bf6017f37352acb9a794c338135/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:43 compute-0 podman[77658]: 2025-12-01 20:31:43.627582569 +0000 UTC m=+0.023595706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:43 compute-0 podman[77658]: 2025-12-01 20:31:43.730074837 +0000 UTC m=+0.126087954 container init 9ca390447ab5e5d203c34a2489b3cced7c71e89be4ec3438296bedb3120edb6e (image=quay.io/ceph/ceph:v20, name=modest_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:43 compute-0 podman[77658]: 2025-12-01 20:31:43.738274181 +0000 UTC m=+0.134287288 container start 9ca390447ab5e5d203c34a2489b3cced7c71e89be4ec3438296bedb3120edb6e (image=quay.io/ceph/ceph:v20, name=modest_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 01 20:31:43 compute-0 podman[77658]: 2025-12-01 20:31:43.742203224 +0000 UTC m=+0.138216361 container attach 9ca390447ab5e5d203c34a2489b3cced7c71e89be4ec3438296bedb3120edb6e (image=quay.io/ceph/ceph:v20, name=modest_tharp, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:44 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:44 compute-0 ceph-mgr[76174]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 01 20:31:44 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 01 20:31:44 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 01 20:31:44 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:44 compute-0 modest_tharp[77695]: Scheduled mon update...
Dec 01 20:31:44 compute-0 systemd[1]: libpod-9ca390447ab5e5d203c34a2489b3cced7c71e89be4ec3438296bedb3120edb6e.scope: Deactivated successfully.
Dec 01 20:31:44 compute-0 conmon[77695]: conmon 9ca390447ab5e5d203c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ca390447ab5e5d203c34a2489b3cced7c71e89be4ec3438296bedb3120edb6e.scope/container/memory.events
Dec 01 20:31:44 compute-0 podman[77658]: 2025-12-01 20:31:44.16779956 +0000 UTC m=+0.563812697 container died 9ca390447ab5e5d203c34a2489b3cced7c71e89be4ec3438296bedb3120edb6e (image=quay.io/ceph/ceph:v20, name=modest_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e324beb355acef498c177e0a78ea26a0fb039bf6017f37352acb9a794c338135-merged.mount: Deactivated successfully.
Dec 01 20:31:44 compute-0 podman[77658]: 2025-12-01 20:31:44.216542667 +0000 UTC m=+0.612555764 container remove 9ca390447ab5e5d203c34a2489b3cced7c71e89be4ec3438296bedb3120edb6e (image=quay.io/ceph/ceph:v20, name=modest_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Dec 01 20:31:44 compute-0 systemd[1]: libpod-conmon-9ca390447ab5e5d203c34a2489b3cced7c71e89be4ec3438296bedb3120edb6e.scope: Deactivated successfully.
Dec 01 20:31:44 compute-0 podman[77760]: 2025-12-01 20:31:44.298621871 +0000 UTC m=+0.057525646 container create 0814c38dd0a9e0bbec5687a950e12a40e683868dfdbf39ab7cbfcb0d6b6d414f (image=quay.io/ceph/ceph:v20, name=stupefied_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:44 compute-0 systemd[1]: Started libpod-conmon-0814c38dd0a9e0bbec5687a950e12a40e683868dfdbf39ab7cbfcb0d6b6d414f.scope.
Dec 01 20:31:44 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0a697a3443c1637ef3aac86a9d0ab8033e7433d42d4d92e29b1fde08bf23e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0a697a3443c1637ef3aac86a9d0ab8033e7433d42d4d92e29b1fde08bf23e4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0a697a3443c1637ef3aac86a9d0ab8033e7433d42d4d92e29b1fde08bf23e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:44 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054704 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:31:44 compute-0 podman[77760]: 2025-12-01 20:31:44.36702912 +0000 UTC m=+0.125932925 container init 0814c38dd0a9e0bbec5687a950e12a40e683868dfdbf39ab7cbfcb0d6b6d414f (image=quay.io/ceph/ceph:v20, name=stupefied_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:44 compute-0 podman[77760]: 2025-12-01 20:31:44.274060812 +0000 UTC m=+0.032964627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:44 compute-0 podman[77760]: 2025-12-01 20:31:44.373759652 +0000 UTC m=+0.132663427 container start 0814c38dd0a9e0bbec5687a950e12a40e683868dfdbf39ab7cbfcb0d6b6d414f (image=quay.io/ceph/ceph:v20, name=stupefied_williams, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:44 compute-0 podman[77760]: 2025-12-01 20:31:44.376674476 +0000 UTC m=+0.135578261 container attach 0814c38dd0a9e0bbec5687a950e12a40e683868dfdbf39ab7cbfcb0d6b6d414f (image=quay.io/ceph/ceph:v20, name=stupefied_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:44 compute-0 ceph-mgr[76174]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 01 20:31:44 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:44 compute-0 ceph-mon[75880]: Added host compute-0
Dec 01 20:31:44 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:31:44 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:44 compute-0 podman[77728]: 2025-12-01 20:31:44.562435096 +0000 UTC m=+0.705623960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:44 compute-0 podman[77815]: 2025-12-01 20:31:44.669132381 +0000 UTC m=+0.044381082 container create c65250903b8fbceec4d8f8f1030644309fb71d4fe116773efb98b16fd7b5bd5d (image=quay.io/ceph/ceph:v20, name=focused_keller, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:44 compute-0 systemd[1]: Started libpod-conmon-c65250903b8fbceec4d8f8f1030644309fb71d4fe116773efb98b16fd7b5bd5d.scope.
Dec 01 20:31:44 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:44 compute-0 podman[77815]: 2025-12-01 20:31:44.723130696 +0000 UTC m=+0.098379407 container init c65250903b8fbceec4d8f8f1030644309fb71d4fe116773efb98b16fd7b5bd5d (image=quay.io/ceph/ceph:v20, name=focused_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:44 compute-0 podman[77815]: 2025-12-01 20:31:44.72797697 +0000 UTC m=+0.103225671 container start c65250903b8fbceec4d8f8f1030644309fb71d4fe116773efb98b16fd7b5bd5d (image=quay.io/ceph/ceph:v20, name=focused_keller, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:44 compute-0 podman[77815]: 2025-12-01 20:31:44.730684784 +0000 UTC m=+0.105933505 container attach c65250903b8fbceec4d8f8f1030644309fb71d4fe116773efb98b16fd7b5bd5d (image=quay.io/ceph/ceph:v20, name=focused_keller, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:44 compute-0 podman[77815]: 2025-12-01 20:31:44.651822472 +0000 UTC m=+0.027071203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:44 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:44 compute-0 ceph-mgr[76174]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 01 20:31:44 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 01 20:31:44 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 01 20:31:44 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:44 compute-0 stupefied_williams[77776]: Scheduled mgr update...
Dec 01 20:31:44 compute-0 focused_keller[77832]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 01 20:31:44 compute-0 systemd[1]: libpod-c65250903b8fbceec4d8f8f1030644309fb71d4fe116773efb98b16fd7b5bd5d.scope: Deactivated successfully.
Dec 01 20:31:44 compute-0 podman[77815]: 2025-12-01 20:31:44.817870245 +0000 UTC m=+0.193118966 container died c65250903b8fbceec4d8f8f1030644309fb71d4fe116773efb98b16fd7b5bd5d (image=quay.io/ceph/ceph:v20, name=focused_keller, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:44 compute-0 systemd[1]: libpod-0814c38dd0a9e0bbec5687a950e12a40e683868dfdbf39ab7cbfcb0d6b6d414f.scope: Deactivated successfully.
Dec 01 20:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e0392fbee225ad7ea4dd51663ec25da664e23a98192f6ba7b254055a7ca3b78-merged.mount: Deactivated successfully.
Dec 01 20:31:45 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:45 compute-0 podman[77815]: 2025-12-01 20:31:45.475112758 +0000 UTC m=+0.850361489 container remove c65250903b8fbceec4d8f8f1030644309fb71d4fe116773efb98b16fd7b5bd5d (image=quay.io/ceph/ceph:v20, name=focused_keller, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:45 compute-0 sudo[77654]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 01 20:31:45 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:45 compute-0 podman[77760]: 2025-12-01 20:31:45.550764638 +0000 UTC m=+1.309668403 container died 0814c38dd0a9e0bbec5687a950e12a40e683868dfdbf39ab7cbfcb0d6b6d414f (image=quay.io/ceph/ceph:v20, name=stupefied_williams, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd0a697a3443c1637ef3aac86a9d0ab8033e7433d42d4d92e29b1fde08bf23e4-merged.mount: Deactivated successfully.
Dec 01 20:31:45 compute-0 podman[77840]: 2025-12-01 20:31:45.585948088 +0000 UTC m=+0.756476227 container remove 0814c38dd0a9e0bbec5687a950e12a40e683868dfdbf39ab7cbfcb0d6b6d414f (image=quay.io/ceph/ceph:v20, name=stupefied_williams, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 01 20:31:45 compute-0 systemd[1]: libpod-conmon-0814c38dd0a9e0bbec5687a950e12a40e683868dfdbf39ab7cbfcb0d6b6d414f.scope: Deactivated successfully.
Dec 01 20:31:45 compute-0 systemd[1]: libpod-conmon-c65250903b8fbceec4d8f8f1030644309fb71d4fe116773efb98b16fd7b5bd5d.scope: Deactivated successfully.
Dec 01 20:31:45 compute-0 sudo[77862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:45 compute-0 sudo[77862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:45 compute-0 sudo[77862]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:45 compute-0 podman[77887]: 2025-12-01 20:31:45.645637665 +0000 UTC m=+0.042426352 container create 955ed5cc2fd401d675e86d11a10e0311e58f4f65329129887d8dfec7454773b4 (image=quay.io/ceph/ceph:v20, name=silly_chebyshev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:45 compute-0 sudo[77900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 01 20:31:45 compute-0 sudo[77900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:45 compute-0 systemd[1]: Started libpod-conmon-955ed5cc2fd401d675e86d11a10e0311e58f4f65329129887d8dfec7454773b4.scope.
Dec 01 20:31:45 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fee357e853b6aeddf9bd09eb26a7e5b0fa449988883e0d798b587e4cd2b4c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fee357e853b6aeddf9bd09eb26a7e5b0fa449988883e0d798b587e4cd2b4c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fee357e853b6aeddf9bd09eb26a7e5b0fa449988883e0d798b587e4cd2b4c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:45 compute-0 podman[77887]: 2025-12-01 20:31:45.709435624 +0000 UTC m=+0.106224331 container init 955ed5cc2fd401d675e86d11a10e0311e58f4f65329129887d8dfec7454773b4 (image=quay.io/ceph/ceph:v20, name=silly_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 01 20:31:45 compute-0 podman[77887]: 2025-12-01 20:31:45.717085746 +0000 UTC m=+0.113874433 container start 955ed5cc2fd401d675e86d11a10e0311e58f4f65329129887d8dfec7454773b4 (image=quay.io/ceph/ceph:v20, name=silly_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:45 compute-0 podman[77887]: 2025-12-01 20:31:45.720381763 +0000 UTC m=+0.117170460 container attach 955ed5cc2fd401d675e86d11a10e0311e58f4f65329129887d8dfec7454773b4 (image=quay.io/ceph/ceph:v20, name=silly_chebyshev, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 01 20:31:45 compute-0 podman[77887]: 2025-12-01 20:31:45.626798118 +0000 UTC m=+0.023586835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:45 compute-0 ceph-mon[75880]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:45 compute-0 ceph-mon[75880]: Saving service mon spec with placement count:5
Dec 01 20:31:45 compute-0 ceph-mon[75880]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:45 compute-0 ceph-mon[75880]: Saving service mgr spec with placement count:2
Dec 01 20:31:45 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:45 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:45 compute-0 sudo[77900]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:45 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:46 compute-0 sudo[77975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:46 compute-0 sudo[77975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:46 compute-0 sudo[77975]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:46 compute-0 sudo[78000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 01 20:31:46 compute-0 sudo[78000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:46 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:46 compute-0 ceph-mgr[76174]: [cephadm INFO root] Saving service crash spec with placement *
Dec 01 20:31:46 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 01 20:31:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 01 20:31:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:46 compute-0 silly_chebyshev[77930]: Scheduled crash update...
Dec 01 20:31:46 compute-0 systemd[1]: libpod-955ed5cc2fd401d675e86d11a10e0311e58f4f65329129887d8dfec7454773b4.scope: Deactivated successfully.
Dec 01 20:31:46 compute-0 podman[77887]: 2025-12-01 20:31:46.156092751 +0000 UTC m=+0.552881438 container died 955ed5cc2fd401d675e86d11a10e0311e58f4f65329129887d8dfec7454773b4 (image=quay.io/ceph/ceph:v20, name=silly_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 01 20:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-94fee357e853b6aeddf9bd09eb26a7e5b0fa449988883e0d798b587e4cd2b4c8-merged.mount: Deactivated successfully.
Dec 01 20:31:46 compute-0 podman[77887]: 2025-12-01 20:31:46.192884033 +0000 UTC m=+0.589672730 container remove 955ed5cc2fd401d675e86d11a10e0311e58f4f65329129887d8dfec7454773b4 (image=quay.io/ceph/ceph:v20, name=silly_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:46 compute-0 systemd[1]: libpod-conmon-955ed5cc2fd401d675e86d11a10e0311e58f4f65329129887d8dfec7454773b4.scope: Deactivated successfully.
Dec 01 20:31:46 compute-0 podman[78039]: 2025-12-01 20:31:46.250814231 +0000 UTC m=+0.036363308 container create 6a91c26e533954e9556551c7b4b57f163e4dfe30833bdc920175e354395bfc39 (image=quay.io/ceph/ceph:v20, name=flamboyant_nobel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:46 compute-0 systemd[1]: Started libpod-conmon-6a91c26e533954e9556551c7b4b57f163e4dfe30833bdc920175e354395bfc39.scope.
Dec 01 20:31:46 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b691431145a2c37585ce35e9aa5719e07e349de5a64ee8d2e08d6d128538977/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b691431145a2c37585ce35e9aa5719e07e349de5a64ee8d2e08d6d128538977/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b691431145a2c37585ce35e9aa5719e07e349de5a64ee8d2e08d6d128538977/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:46 compute-0 podman[78039]: 2025-12-01 20:31:46.324049143 +0000 UTC m=+0.109598240 container init 6a91c26e533954e9556551c7b4b57f163e4dfe30833bdc920175e354395bfc39 (image=quay.io/ceph/ceph:v20, name=flamboyant_nobel, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:46 compute-0 podman[78039]: 2025-12-01 20:31:46.328655304 +0000 UTC m=+0.114204381 container start 6a91c26e533954e9556551c7b4b57f163e4dfe30833bdc920175e354395bfc39 (image=quay.io/ceph/ceph:v20, name=flamboyant_nobel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:31:46 compute-0 podman[78039]: 2025-12-01 20:31:46.235914478 +0000 UTC m=+0.021463585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:46 compute-0 podman[78039]: 2025-12-01 20:31:46.332616818 +0000 UTC m=+0.118165895 container attach 6a91c26e533954e9556551c7b4b57f163e4dfe30833bdc920175e354395bfc39 (image=quay.io/ceph/ceph:v20, name=flamboyant_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:46 compute-0 ceph-mgr[76174]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 01 20:31:46 compute-0 podman[78121]: 2025-12-01 20:31:46.49597174 +0000 UTC m=+0.053035072 container exec 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:46 compute-0 podman[78121]: 2025-12-01 20:31:46.595613416 +0000 UTC m=+0.152676718 container exec_died 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 01 20:31:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1014330907' entity='client.admin' 
Dec 01 20:31:46 compute-0 systemd[1]: libpod-6a91c26e533954e9556551c7b4b57f163e4dfe30833bdc920175e354395bfc39.scope: Deactivated successfully.
Dec 01 20:31:46 compute-0 podman[78039]: 2025-12-01 20:31:46.739937791 +0000 UTC m=+0.525486898 container died 6a91c26e533954e9556551c7b4b57f163e4dfe30833bdc920175e354395bfc39 (image=quay.io/ceph/ceph:v20, name=flamboyant_nobel, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b691431145a2c37585ce35e9aa5719e07e349de5a64ee8d2e08d6d128538977-merged.mount: Deactivated successfully.
Dec 01 20:31:46 compute-0 podman[78039]: 2025-12-01 20:31:46.782581034 +0000 UTC m=+0.568130111 container remove 6a91c26e533954e9556551c7b4b57f163e4dfe30833bdc920175e354395bfc39 (image=quay.io/ceph/ceph:v20, name=flamboyant_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 01 20:31:46 compute-0 systemd[1]: libpod-conmon-6a91c26e533954e9556551c7b4b57f163e4dfe30833bdc920175e354395bfc39.scope: Deactivated successfully.
Dec 01 20:31:46 compute-0 podman[78215]: 2025-12-01 20:31:46.84102623 +0000 UTC m=+0.040555496 container create 60fe7522d4b81db2a80b7a88175056c400422a61521e17acd25485aaef8ebcdf (image=quay.io/ceph/ceph:v20, name=adoring_murdock, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:46 compute-0 sudo[78000]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:46 compute-0 systemd[1]: Started libpod-conmon-60fe7522d4b81db2a80b7a88175056c400422a61521e17acd25485aaef8ebcdf.scope.
Dec 01 20:31:46 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d8f44ef5b3f2d45f5f69abff422d9ee37b66c894e0fd0e720de6ae1256149c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d8f44ef5b3f2d45f5f69abff422d9ee37b66c894e0fd0e720de6ae1256149c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d8f44ef5b3f2d45f5f69abff422d9ee37b66c894e0fd0e720de6ae1256149c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:46 compute-0 podman[78215]: 2025-12-01 20:31:46.824720428 +0000 UTC m=+0.024249714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:46 compute-0 sudo[78231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:46 compute-0 podman[78215]: 2025-12-01 20:31:46.92040048 +0000 UTC m=+0.119929776 container init 60fe7522d4b81db2a80b7a88175056c400422a61521e17acd25485aaef8ebcdf (image=quay.io/ceph/ceph:v20, name=adoring_murdock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 01 20:31:46 compute-0 sudo[78231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:46 compute-0 podman[78215]: 2025-12-01 20:31:46.926638423 +0000 UTC m=+0.126167689 container start 60fe7522d4b81db2a80b7a88175056c400422a61521e17acd25485aaef8ebcdf (image=quay.io/ceph/ceph:v20, name=adoring_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 01 20:31:46 compute-0 sudo[78231]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:46 compute-0 podman[78215]: 2025-12-01 20:31:46.931327798 +0000 UTC m=+0.130857154 container attach 60fe7522d4b81db2a80b7a88175056c400422a61521e17acd25485aaef8ebcdf (image=quay.io/ceph/ceph:v20, name=adoring_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:31:46 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:46 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:46 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1014330907' entity='client.admin' 
Dec 01 20:31:46 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:46 compute-0 sudo[78261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:31:46 compute-0 sudo[78261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:47 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78317 (sysctl)
Dec 01 20:31:47 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 01 20:31:47 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:47 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 01 20:31:47 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 01 20:31:47 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:47 compute-0 systemd[1]: libpod-60fe7522d4b81db2a80b7a88175056c400422a61521e17acd25485aaef8ebcdf.scope: Deactivated successfully.
Dec 01 20:31:47 compute-0 podman[78215]: 2025-12-01 20:31:47.39148307 +0000 UTC m=+0.591012326 container died 60fe7522d4b81db2a80b7a88175056c400422a61521e17acd25485aaef8ebcdf (image=quay.io/ceph/ceph:v20, name=adoring_murdock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 01 20:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-22d8f44ef5b3f2d45f5f69abff422d9ee37b66c894e0fd0e720de6ae1256149c-merged.mount: Deactivated successfully.
Dec 01 20:31:47 compute-0 podman[78215]: 2025-12-01 20:31:47.433043811 +0000 UTC m=+0.632573077 container remove 60fe7522d4b81db2a80b7a88175056c400422a61521e17acd25485aaef8ebcdf (image=quay.io/ceph/ceph:v20, name=adoring_murdock, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:31:47 compute-0 systemd[1]: libpod-conmon-60fe7522d4b81db2a80b7a88175056c400422a61521e17acd25485aaef8ebcdf.scope: Deactivated successfully.
Dec 01 20:31:47 compute-0 podman[78341]: 2025-12-01 20:31:47.495570759 +0000 UTC m=+0.040476131 container create aa4f863f39e0f4616784d5b072ff2fbd1a1ab8ffcd6fc2721bc5a94ee59fa2a5 (image=quay.io/ceph/ceph:v20, name=quizzical_haslett, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:47 compute-0 systemd[1]: Started libpod-conmon-aa4f863f39e0f4616784d5b072ff2fbd1a1ab8ffcd6fc2721bc5a94ee59fa2a5.scope.
Dec 01 20:31:47 compute-0 sudo[78261]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:47 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be806a858f0f4bdda1814529f61f25cfce2aae32819d64280808f5ed725a84e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be806a858f0f4bdda1814529f61f25cfce2aae32819d64280808f5ed725a84e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be806a858f0f4bdda1814529f61f25cfce2aae32819d64280808f5ed725a84e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:47 compute-0 podman[78341]: 2025-12-01 20:31:47.477714018 +0000 UTC m=+0.022619410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:47 compute-0 podman[78341]: 2025-12-01 20:31:47.575270537 +0000 UTC m=+0.120175929 container init aa4f863f39e0f4616784d5b072ff2fbd1a1ab8ffcd6fc2721bc5a94ee59fa2a5 (image=quay.io/ceph/ceph:v20, name=quizzical_haslett, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 01 20:31:47 compute-0 podman[78341]: 2025-12-01 20:31:47.580840962 +0000 UTC m=+0.125746334 container start aa4f863f39e0f4616784d5b072ff2fbd1a1ab8ffcd6fc2721bc5a94ee59fa2a5 (image=quay.io/ceph/ceph:v20, name=quizzical_haslett, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:47 compute-0 podman[78341]: 2025-12-01 20:31:47.583472792 +0000 UTC m=+0.128378184 container attach aa4f863f39e0f4616784d5b072ff2fbd1a1ab8ffcd6fc2721bc5a94ee59fa2a5 (image=quay.io/ceph/ceph:v20, name=quizzical_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:47 compute-0 sudo[78373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:47 compute-0 sudo[78373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:47 compute-0 sudo[78373]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:47 compute-0 sudo[78400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 01 20:31:47 compute-0 sudo[78400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:47 compute-0 sudo[78400]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:47 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:47 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 01 20:31:47 compute-0 sudo[78463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:47 compute-0 sudo[78463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:47 compute-0 sudo[78463]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:48 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:48 compute-0 ceph-mgr[76174]: [cephadm INFO root] Added label _admin to host compute-0
Dec 01 20:31:48 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 01 20:31:48 compute-0 quizzical_haslett[78370]: Added label _admin to host compute-0
Dec 01 20:31:48 compute-0 sudo[78489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- inventory --format=json-pretty --filter-for-batch
Dec 01 20:31:48 compute-0 systemd[1]: libpod-aa4f863f39e0f4616784d5b072ff2fbd1a1ab8ffcd6fc2721bc5a94ee59fa2a5.scope: Deactivated successfully.
Dec 01 20:31:48 compute-0 podman[78341]: 2025-12-01 20:31:48.058783951 +0000 UTC m=+0.603689333 container died aa4f863f39e0f4616784d5b072ff2fbd1a1ab8ffcd6fc2721bc5a94ee59fa2a5 (image=quay.io/ceph/ceph:v20, name=quizzical_haslett, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 01 20:31:48 compute-0 sudo[78489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:48 compute-0 ceph-mon[75880]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:48 compute-0 ceph-mon[75880]: Saving service crash spec with placement *
Dec 01 20:31:48 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:48 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2be806a858f0f4bdda1814529f61f25cfce2aae32819d64280808f5ed725a84e-merged.mount: Deactivated successfully.
Dec 01 20:31:48 compute-0 podman[78341]: 2025-12-01 20:31:48.183102763 +0000 UTC m=+0.728008135 container remove aa4f863f39e0f4616784d5b072ff2fbd1a1ab8ffcd6fc2721bc5a94ee59fa2a5 (image=quay.io/ceph/ceph:v20, name=quizzical_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 01 20:31:48 compute-0 systemd[1]: libpod-conmon-aa4f863f39e0f4616784d5b072ff2fbd1a1ab8ffcd6fc2721bc5a94ee59fa2a5.scope: Deactivated successfully.
Dec 01 20:31:48 compute-0 podman[78527]: 2025-12-01 20:31:48.33189554 +0000 UTC m=+0.120565801 container create 622e7eda2fb3a14fac2fbc621e0292729fb95e5b964bbd879a31597819e6be31 (image=quay.io/ceph/ceph:v20, name=funny_euclid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 01 20:31:48 compute-0 podman[78527]: 2025-12-01 20:31:48.23784307 +0000 UTC m=+0.026513371 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:48 compute-0 ceph-mgr[76174]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 01 20:31:48 compute-0 systemd[1]: Started libpod-conmon-622e7eda2fb3a14fac2fbc621e0292729fb95e5b964bbd879a31597819e6be31.scope.
Dec 01 20:31:48 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b257364059fd64b9e26b763ba2f1ce3ed11313907bd4efa14fe93d0926ff9fb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b257364059fd64b9e26b763ba2f1ce3ed11313907bd4efa14fe93d0926ff9fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b257364059fd64b9e26b763ba2f1ce3ed11313907bd4efa14fe93d0926ff9fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:48 compute-0 podman[78527]: 2025-12-01 20:31:48.664735499 +0000 UTC m=+0.453405770 container init 622e7eda2fb3a14fac2fbc621e0292729fb95e5b964bbd879a31597819e6be31 (image=quay.io/ceph/ceph:v20, name=funny_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 01 20:31:48 compute-0 podman[78527]: 2025-12-01 20:31:48.673958552 +0000 UTC m=+0.462628813 container start 622e7eda2fb3a14fac2fbc621e0292729fb95e5b964bbd879a31597819e6be31 (image=quay.io/ceph/ceph:v20, name=funny_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Dec 01 20:31:48 compute-0 podman[78527]: 2025-12-01 20:31:48.678325888 +0000 UTC m=+0.466996149 container attach 622e7eda2fb3a14fac2fbc621e0292729fb95e5b964bbd879a31597819e6be31 (image=quay.io/ceph/ceph:v20, name=funny_euclid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 01 20:31:48 compute-0 podman[78559]: 2025-12-01 20:31:48.71139004 +0000 UTC m=+0.153813593 container create 6ab5a014798163b03d0f8641d266cbc00ef5bc59d83d664cd5bffd4ed989635b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:31:48 compute-0 systemd[1]: Started libpod-conmon-6ab5a014798163b03d0f8641d266cbc00ef5bc59d83d664cd5bffd4ed989635b.scope.
Dec 01 20:31:48 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:48 compute-0 podman[78559]: 2025-12-01 20:31:48.690003759 +0000 UTC m=+0.132427342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:31:48 compute-0 podman[78559]: 2025-12-01 20:31:48.998404556 +0000 UTC m=+0.440828109 container init 6ab5a014798163b03d0f8641d266cbc00ef5bc59d83d664cd5bffd4ed989635b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jennings, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 01 20:31:49 compute-0 podman[78559]: 2025-12-01 20:31:49.004944726 +0000 UTC m=+0.447368289 container start 6ab5a014798163b03d0f8641d266cbc00ef5bc59d83d664cd5bffd4ed989635b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 01 20:31:49 compute-0 musing_jennings[78578]: 167 167
Dec 01 20:31:49 compute-0 systemd[1]: libpod-6ab5a014798163b03d0f8641d266cbc00ef5bc59d83d664cd5bffd4ed989635b.scope: Deactivated successfully.
Dec 01 20:31:49 compute-0 podman[78559]: 2025-12-01 20:31:49.100930767 +0000 UTC m=+0.543354320 container attach 6ab5a014798163b03d0f8641d266cbc00ef5bc59d83d664cd5bffd4ed989635b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jennings, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:49 compute-0 podman[78559]: 2025-12-01 20:31:49.101478527 +0000 UTC m=+0.543902080 container died 6ab5a014798163b03d0f8641d266cbc00ef5bc59d83d664cd5bffd4ed989635b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-65f895b76efe3f8b893cb59a2049c69dddcd4cb59c69f382aea40ce8020786fb-merged.mount: Deactivated successfully.
Dec 01 20:31:49 compute-0 podman[78559]: 2025-12-01 20:31:49.143654143 +0000 UTC m=+0.586077706 container remove 6ab5a014798163b03d0f8641d266cbc00ef5bc59d83d664cd5bffd4ed989635b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jennings, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:49 compute-0 ceph-mon[75880]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:49 compute-0 ceph-mon[75880]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:49 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:49 compute-0 ceph-mon[75880]: Added label _admin to host compute-0
Dec 01 20:31:49 compute-0 systemd[1]: libpod-conmon-6ab5a014798163b03d0f8641d266cbc00ef5bc59d83d664cd5bffd4ed989635b.scope: Deactivated successfully.
Dec 01 20:31:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 01 20:31:49 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3552686607' entity='client.admin' 
Dec 01 20:31:49 compute-0 funny_euclid[78556]: set mgr/dashboard/cluster/status
Dec 01 20:31:49 compute-0 systemd[1]: libpod-622e7eda2fb3a14fac2fbc621e0292729fb95e5b964bbd879a31597819e6be31.scope: Deactivated successfully.
Dec 01 20:31:49 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:49 compute-0 podman[78618]: 2025-12-01 20:31:49.279426334 +0000 UTC m=+0.029065155 container died 622e7eda2fb3a14fac2fbc621e0292729fb95e5b964bbd879a31597819e6be31 (image=quay.io/ceph/ceph:v20, name=funny_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 01 20:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b257364059fd64b9e26b763ba2f1ce3ed11313907bd4efa14fe93d0926ff9fb-merged.mount: Deactivated successfully.
Dec 01 20:31:49 compute-0 podman[78618]: 2025-12-01 20:31:49.315502705 +0000 UTC m=+0.065141496 container remove 622e7eda2fb3a14fac2fbc621e0292729fb95e5b964bbd879a31597819e6be31 (image=quay.io/ceph/ceph:v20, name=funny_euclid, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:31:49 compute-0 systemd[1]: libpod-conmon-622e7eda2fb3a14fac2fbc621e0292729fb95e5b964bbd879a31597819e6be31.scope: Deactivated successfully.
Dec 01 20:31:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:31:49 compute-0 systemd[1]: Reloading.
Dec 01 20:31:49 compute-0 systemd-sysv-generator[78666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:49 compute-0 systemd-rc-local-generator[78661]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:49 compute-0 sudo[74825]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:49 compute-0 podman[78680]: 2025-12-01 20:31:49.754477929 +0000 UTC m=+0.038384062 container create 8f544cbf74094fadd285d2d6fc5c17c5c59d60e99bca0d01c45437d63d6b9fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:31:49 compute-0 systemd[1]: Started libpod-conmon-8f544cbf74094fadd285d2d6fc5c17c5c59d60e99bca0d01c45437d63d6b9fb5.scope.
Dec 01 20:31:49 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71feb265f0f615604ae65a07e59c430210039df856c0c05b04428f05d89fcd81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71feb265f0f615604ae65a07e59c430210039df856c0c05b04428f05d89fcd81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71feb265f0f615604ae65a07e59c430210039df856c0c05b04428f05d89fcd81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71feb265f0f615604ae65a07e59c430210039df856c0c05b04428f05d89fcd81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:49 compute-0 podman[78680]: 2025-12-01 20:31:49.827069255 +0000 UTC m=+0.110975428 container init 8f544cbf74094fadd285d2d6fc5c17c5c59d60e99bca0d01c45437d63d6b9fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:49 compute-0 podman[78680]: 2025-12-01 20:31:49.737786114 +0000 UTC m=+0.021692227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:31:49 compute-0 podman[78680]: 2025-12-01 20:31:49.83581464 +0000 UTC m=+0.119720733 container start 8f544cbf74094fadd285d2d6fc5c17c5c59d60e99bca0d01c45437d63d6b9fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 01 20:31:49 compute-0 podman[78680]: 2025-12-01 20:31:49.839243284 +0000 UTC m=+0.123149437 container attach 8f544cbf74094fadd285d2d6fc5c17c5c59d60e99bca0d01c45437d63d6b9fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 01 20:31:49 compute-0 sudo[78724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qetssnrbbsskduobycliheylgzkvompn ; /usr/bin/python3'
Dec 01 20:31:49 compute-0 sudo[78724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:31:50 compute-0 python3[78726]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:31:50 compute-0 podman[78729]: 2025-12-01 20:31:50.125316868 +0000 UTC m=+0.052323032 container create 335f1dc777fb2b9647b66049290a9c17a14aa6147395e316193eab7aa180288c (image=quay.io/ceph/ceph:v20, name=amazing_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:50 compute-0 systemd[1]: Started libpod-conmon-335f1dc777fb2b9647b66049290a9c17a14aa6147395e316193eab7aa180288c.scope.
Dec 01 20:31:50 compute-0 podman[78729]: 2025-12-01 20:31:50.098399325 +0000 UTC m=+0.025405569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:50 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6064740f82df5db285c0df7fac563053d172bc3a1e003ea4e166622300a23f72/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6064740f82df5db285c0df7fac563053d172bc3a1e003ea4e166622300a23f72/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:50 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3552686607' entity='client.admin' 
Dec 01 20:31:50 compute-0 podman[78729]: 2025-12-01 20:31:50.223030016 +0000 UTC m=+0.150036200 container init 335f1dc777fb2b9647b66049290a9c17a14aa6147395e316193eab7aa180288c (image=quay.io/ceph/ceph:v20, name=amazing_heyrovsky, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 01 20:31:50 compute-0 podman[78729]: 2025-12-01 20:31:50.229879423 +0000 UTC m=+0.156885587 container start 335f1dc777fb2b9647b66049290a9c17a14aa6147395e316193eab7aa180288c (image=quay.io/ceph/ceph:v20, name=amazing_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:31:50 compute-0 podman[78729]: 2025-12-01 20:31:50.233146788 +0000 UTC m=+0.160152972 container attach 335f1dc777fb2b9647b66049290a9c17a14aa6147395e316193eab7aa180288c (image=quay.io/ceph/ceph:v20, name=amazing_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]: [
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:     {
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:         "available": false,
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:         "being_replaced": false,
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:         "ceph_device_lvm": false,
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:         "lsm_data": {},
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:         "lvs": [],
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:         "path": "/dev/sr0",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:         "rejected_reasons": [
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "Has a FileSystem",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "Insufficient space (<5GB)"
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:         ],
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:         "sys_api": {
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "actuators": null,
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "device_nodes": [
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:                 "sr0"
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             ],
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "devname": "sr0",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "human_readable_size": "482.00 KB",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "id_bus": "ata",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "model": "QEMU DVD-ROM",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "nr_requests": "2",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "parent": "/dev/sr0",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "partitions": {},
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "path": "/dev/sr0",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "removable": "1",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "rev": "2.5+",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "ro": "0",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "rotational": "1",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "sas_address": "",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "sas_device_handle": "",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "scheduler_mode": "mq-deadline",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "sectors": 0,
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "sectorsize": "2048",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "size": 493568.0,
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "support_discard": "2048",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "type": "disk",
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:             "vendor": "QEMU"
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:         }
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]:     }
Dec 01 20:31:50 compute-0 recursing_lederberg[78696]: ]
Dec 01 20:31:50 compute-0 systemd[1]: libpod-8f544cbf74094fadd285d2d6fc5c17c5c59d60e99bca0d01c45437d63d6b9fb5.scope: Deactivated successfully.
Dec 01 20:31:50 compute-0 podman[78680]: 2025-12-01 20:31:50.323929014 +0000 UTC m=+0.607835127 container died 8f544cbf74094fadd285d2d6fc5c17c5c59d60e99bca0d01c45437d63d6b9fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 01 20:31:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-71feb265f0f615604ae65a07e59c430210039df856c0c05b04428f05d89fcd81-merged.mount: Deactivated successfully.
Dec 01 20:31:50 compute-0 podman[78680]: 2025-12-01 20:31:50.363910935 +0000 UTC m=+0.647817028 container remove 8f544cbf74094fadd285d2d6fc5c17c5c59d60e99bca0d01c45437d63d6b9fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:50 compute-0 systemd[1]: libpod-conmon-8f544cbf74094fadd285d2d6fc5c17c5c59d60e99bca0d01c45437d63d6b9fb5.scope: Deactivated successfully.
Dec 01 20:31:50 compute-0 sudo[78489]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:31:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:31:50 compute-0 ceph-mgr[76174]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 01 20:31:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 01 20:31:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 01 20:31:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:31:50 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:31:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:31:50 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 01 20:31:50 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 01 20:31:50 compute-0 sudo[79447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 01 20:31:50 compute-0 sudo[79447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:50 compute-0 sudo[79447]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:50 compute-0 sudo[79472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph
Dec 01 20:31:50 compute-0 sudo[79472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:50 compute-0 sudo[79472]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:50 compute-0 sudo[79497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph/ceph.conf.new
Dec 01 20:31:50 compute-0 sudo[79497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:50 compute-0 sudo[79497]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 01 20:31:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/465583724' entity='client.admin' 
Dec 01 20:31:50 compute-0 sudo[79522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:50 compute-0 sudo[79522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:50 compute-0 systemd[1]: libpod-335f1dc777fb2b9647b66049290a9c17a14aa6147395e316193eab7aa180288c.scope: Deactivated successfully.
Dec 01 20:31:50 compute-0 podman[78729]: 2025-12-01 20:31:50.680421271 +0000 UTC m=+0.607427435 container died 335f1dc777fb2b9647b66049290a9c17a14aa6147395e316193eab7aa180288c (image=quay.io/ceph/ceph:v20, name=amazing_heyrovsky, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 01 20:31:50 compute-0 sudo[79522]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6064740f82df5db285c0df7fac563053d172bc3a1e003ea4e166622300a23f72-merged.mount: Deactivated successfully.
Dec 01 20:31:50 compute-0 podman[78729]: 2025-12-01 20:31:50.720275546 +0000 UTC m=+0.647281710 container remove 335f1dc777fb2b9647b66049290a9c17a14aa6147395e316193eab7aa180288c (image=quay.io/ceph/ceph:v20, name=amazing_heyrovsky, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:50 compute-0 systemd[1]: libpod-conmon-335f1dc777fb2b9647b66049290a9c17a14aa6147395e316193eab7aa180288c.scope: Deactivated successfully.
Dec 01 20:31:50 compute-0 sudo[79550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph/ceph.conf.new
Dec 01 20:31:50 compute-0 sudo[79550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:50 compute-0 sudo[78724]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:50 compute-0 sudo[79550]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:50 compute-0 sudo[79608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph/ceph.conf.new
Dec 01 20:31:50 compute-0 sudo[79608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:50 compute-0 sudo[79608]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:50 compute-0 sudo[79633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph/ceph.conf.new
Dec 01 20:31:50 compute-0 sudo[79633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:50 compute-0 sudo[79633]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:50 compute-0 sudo[79658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 01 20:31:50 compute-0 sudo[79658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:50 compute-0 sudo[79658]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:50 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.conf
Dec 01 20:31:50 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.conf
Dec 01 20:31:50 compute-0 sudo[79683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config
Dec 01 20:31:50 compute-0 sudo[79683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[79683]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 sudo[79708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config
Dec 01 20:31:51 compute-0 sudo[79708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[79708]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 sudo[79756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.conf.new
Dec 01 20:31:51 compute-0 sudo[79756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[79756]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 sudo[79810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:51 compute-0 sudo[79810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[79810]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 sudo[79858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.conf.new
Dec 01 20:31:51 compute-0 sudo[79858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[79858]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:51 compute-0 sudo[79906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.conf.new
Dec 01 20:31:51 compute-0 sudo[79906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[79906]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 sudo[79931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.conf.new
Dec 01 20:31:51 compute-0 sudo[79931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[79931]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:51 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:51 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:51 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:51 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 01 20:31:51 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:51 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:31:51 compute-0 ceph-mon[75880]: Updating compute-0:/etc/ceph/ceph.conf
Dec 01 20:31:51 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/465583724' entity='client.admin' 
Dec 01 20:31:51 compute-0 ceph-mon[75880]: Updating compute-0:/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.conf
Dec 01 20:31:51 compute-0 sudo[79979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.conf.new /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.conf
Dec 01 20:31:51 compute-0 sudo[79979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[79979]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 01 20:31:51 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 01 20:31:51 compute-0 sudo[80028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 01 20:31:51 compute-0 sudo[80028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[80028]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 sudo[80076]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twvttdfpqwlfoedobhepiadojwghmljr ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764621111.0663016-36700-204721117536863/async_wrapper.py j537136416627 30 /home/zuul/.ansible/tmp/ansible-tmp-1764621111.0663016-36700-204721117536863/AnsiballZ_command.py _'
Dec 01 20:31:51 compute-0 sudo[80076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:31:51 compute-0 sudo[80081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph
Dec 01 20:31:51 compute-0 sudo[80081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[80081]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 sudo[80106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph/ceph.client.admin.keyring.new
Dec 01 20:31:51 compute-0 sudo[80106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[80106]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 ansible-async_wrapper.py[80080]: Invoked with j537136416627 30 /home/zuul/.ansible/tmp/ansible-tmp-1764621111.0663016-36700-204721117536863/AnsiballZ_command.py _
Dec 01 20:31:51 compute-0 ansible-async_wrapper.py[80156]: Starting module and watcher
Dec 01 20:31:51 compute-0 sudo[80131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:51 compute-0 ansible-async_wrapper.py[80156]: Start watching 80158 (30)
Dec 01 20:31:51 compute-0 ansible-async_wrapper.py[80158]: Start module (80158)
Dec 01 20:31:51 compute-0 sudo[80131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 ansible-async_wrapper.py[80080]: Return async_wrapper task started.
Dec 01 20:31:51 compute-0 sudo[80131]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 sudo[80076]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 sudo[80161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph/ceph.client.admin.keyring.new
Dec 01 20:31:51 compute-0 sudo[80161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[80161]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 python3[80160]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:31:51 compute-0 sudo[80209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph/ceph.client.admin.keyring.new
Dec 01 20:31:51 compute-0 sudo[80209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[80209]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 podman[80232]: 2025-12-01 20:31:51.858563282 +0000 UTC m=+0.040083789 container create 080b5153ffa479d4ed19176506ce5e42ad204446e795dc7368d8b1500fd32cc6 (image=quay.io/ceph/ceph:v20, name=jolly_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 01 20:31:51 compute-0 systemd[1]: Started libpod-conmon-080b5153ffa479d4ed19176506ce5e42ad204446e795dc7368d8b1500fd32cc6.scope.
Dec 01 20:31:51 compute-0 sudo[80240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph/ceph.client.admin.keyring.new
Dec 01 20:31:51 compute-0 sudo[80240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[80240]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:51 compute-0 podman[80232]: 2025-12-01 20:31:51.841635424 +0000 UTC m=+0.023155971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16d8d385f3a0048d5fc5378dc2e07b9cccb5ac6b52b31f065145396aa85d16a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16d8d385f3a0048d5fc5378dc2e07b9cccb5ac6b52b31f065145396aa85d16a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:51 compute-0 podman[80232]: 2025-12-01 20:31:51.954670248 +0000 UTC m=+0.136190785 container init 080b5153ffa479d4ed19176506ce5e42ad204446e795dc7368d8b1500fd32cc6 (image=quay.io/ceph/ceph:v20, name=jolly_wescoff, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:51 compute-0 podman[80232]: 2025-12-01 20:31:51.960760383 +0000 UTC m=+0.142280900 container start 080b5153ffa479d4ed19176506ce5e42ad204446e795dc7368d8b1500fd32cc6 (image=quay.io/ceph/ceph:v20, name=jolly_wescoff, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec 01 20:31:51 compute-0 podman[80232]: 2025-12-01 20:31:51.964216768 +0000 UTC m=+0.145737305 container attach 080b5153ffa479d4ed19176506ce5e42ad204446e795dc7368d8b1500fd32cc6 (image=quay.io/ceph/ceph:v20, name=jolly_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 01 20:31:51 compute-0 sudo[80277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 01 20:31:51 compute-0 sudo[80277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:51 compute-0 sudo[80277]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:51 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.client.admin.keyring
Dec 01 20:31:51 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.client.admin.keyring
Dec 01 20:31:52 compute-0 sudo[80303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config
Dec 01 20:31:52 compute-0 sudo[80303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:52 compute-0 sudo[80303]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:52 compute-0 sudo[80328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config
Dec 01 20:31:52 compute-0 sudo[80328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:52 compute-0 sudo[80328]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:52 compute-0 sudo[80372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.client.admin.keyring.new
Dec 01 20:31:52 compute-0 sudo[80372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:52 compute-0 sudo[80372]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:52 compute-0 sudo[80397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:52 compute-0 sudo[80397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:52 compute-0 sudo[80397]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:52 compute-0 sudo[80422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.client.admin.keyring.new
Dec 01 20:31:52 compute-0 sudo[80422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:52 compute-0 sudo[80422]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:52 compute-0 sudo[80470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.client.admin.keyring.new
Dec 01 20:31:52 compute-0 sudo[80470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:52 compute-0 sudo[80470]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:52 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:31:52 compute-0 jolly_wescoff[80273]: 
Dec 01 20:31:52 compute-0 jolly_wescoff[80273]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 01 20:31:52 compute-0 ceph-mon[75880]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 01 20:31:52 compute-0 ceph-mon[75880]: Updating compute-0:/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.client.admin.keyring
Dec 01 20:31:52 compute-0 systemd[1]: libpod-080b5153ffa479d4ed19176506ce5e42ad204446e795dc7368d8b1500fd32cc6.scope: Deactivated successfully.
Dec 01 20:31:52 compute-0 podman[80232]: 2025-12-01 20:31:52.423820489 +0000 UTC m=+0.605341036 container died 080b5153ffa479d4ed19176506ce5e42ad204446e795dc7368d8b1500fd32cc6 (image=quay.io/ceph/ceph:v20, name=jolly_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:31:52 compute-0 ceph-mgr[76174]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 01 20:31:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:31:52 compute-0 ceph-mon[75880]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 01 20:31:52 compute-0 sudo[80495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.client.admin.keyring.new
Dec 01 20:31:52 compute-0 sudo[80495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:52 compute-0 sudo[80495]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c16d8d385f3a0048d5fc5378dc2e07b9cccb5ac6b52b31f065145396aa85d16a-merged.mount: Deactivated successfully.
Dec 01 20:31:52 compute-0 podman[80232]: 2025-12-01 20:31:52.462350399 +0000 UTC m=+0.643870916 container remove 080b5153ffa479d4ed19176506ce5e42ad204446e795dc7368d8b1500fd32cc6 (image=quay.io/ceph/ceph:v20, name=jolly_wescoff, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 01 20:31:52 compute-0 systemd[1]: libpod-conmon-080b5153ffa479d4ed19176506ce5e42ad204446e795dc7368d8b1500fd32cc6.scope: Deactivated successfully.
Dec 01 20:31:52 compute-0 ansible-async_wrapper.py[80158]: Module complete (80158)
Dec 01 20:31:52 compute-0 sudo[80524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-dcf60a89-bba0-58b0-a1bf-d4bde723201b/var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.client.admin.keyring.new /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/config/ceph.client.admin.keyring
Dec 01 20:31:52 compute-0 sudo[80524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:52 compute-0 sudo[80524]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:31:52 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:52 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:31:52 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:52 compute-0 ceph-mgr[76174]: [progress INFO root] update: starting ev f236fe5a-12ee-4b85-8e96-ee0c01949e19 (Updating crash deployment (+1 -> 1))
Dec 01 20:31:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 01 20:31:52 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 01 20:31:52 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 01 20:31:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:31:52 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:52 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 01 20:31:52 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 01 20:31:52 compute-0 sudo[80557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:52 compute-0 sudo[80557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:52 compute-0 sudo[80557]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:52 compute-0 sudo[80582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:52 compute-0 sudo[80582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:52 compute-0 sudo[80692]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynjxoqkimaoqfudebiekhnsynqbxwznd ; /usr/bin/python3'
Dec 01 20:31:52 compute-0 sudo[80692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:31:52 compute-0 podman[80693]: 2025-12-01 20:31:52.984780403 +0000 UTC m=+0.035936053 container create 26dffc3eab5e7e0189df3163f713f281bf31180d84793cf4ad9805b730753efb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noether, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:53 compute-0 systemd[1]: Started libpod-conmon-26dffc3eab5e7e0189df3163f713f281bf31180d84793cf4ad9805b730753efb.scope.
Dec 01 20:31:53 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:53 compute-0 podman[80693]: 2025-12-01 20:31:53.059021404 +0000 UTC m=+0.110177054 container init 26dffc3eab5e7e0189df3163f713f281bf31180d84793cf4ad9805b730753efb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:53 compute-0 podman[80693]: 2025-12-01 20:31:53.064553447 +0000 UTC m=+0.115709097 container start 26dffc3eab5e7e0189df3163f713f281bf31180d84793cf4ad9805b730753efb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:53 compute-0 podman[80693]: 2025-12-01 20:31:52.968005084 +0000 UTC m=+0.019160754 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:31:53 compute-0 sleepy_noether[80710]: 167 167
Dec 01 20:31:53 compute-0 systemd[1]: libpod-26dffc3eab5e7e0189df3163f713f281bf31180d84793cf4ad9805b730753efb.scope: Deactivated successfully.
Dec 01 20:31:53 compute-0 podman[80693]: 2025-12-01 20:31:53.067876915 +0000 UTC m=+0.119032565 container attach 26dffc3eab5e7e0189df3163f713f281bf31180d84793cf4ad9805b730753efb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noether, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 01 20:31:53 compute-0 conmon[80710]: conmon 26dffc3eab5e7e0189df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26dffc3eab5e7e0189df3163f713f281bf31180d84793cf4ad9805b730753efb.scope/container/memory.events
Dec 01 20:31:53 compute-0 podman[80693]: 2025-12-01 20:31:53.070130102 +0000 UTC m=+0.121285752 container died 26dffc3eab5e7e0189df3163f713f281bf31180d84793cf4ad9805b730753efb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noether, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-05adbbd1ea855444608f0b354d945b0715cee591de54ee56dd97dc80dc68ea82-merged.mount: Deactivated successfully.
Dec 01 20:31:53 compute-0 python3[80701]: ansible-ansible.legacy.async_status Invoked with jid=j537136416627.80080 mode=status _async_dir=/root/.ansible_async
Dec 01 20:31:53 compute-0 sudo[80692]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:53 compute-0 podman[80693]: 2025-12-01 20:31:53.105005615 +0000 UTC m=+0.156161265 container remove 26dffc3eab5e7e0189df3163f713f281bf31180d84793cf4ad9805b730753efb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_noether, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 01 20:31:53 compute-0 systemd[1]: libpod-conmon-26dffc3eab5e7e0189df3163f713f281bf31180d84793cf4ad9805b730753efb.scope: Deactivated successfully.
Dec 01 20:31:53 compute-0 systemd[1]: Reloading.
Dec 01 20:31:53 compute-0 systemd-rc-local-generator[80801]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:53 compute-0 systemd-sysv-generator[80806]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:53 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:53 compute-0 sudo[80808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcwzlauznkeqmfecejdtfikwomzgkwcz ; /usr/bin/python3'
Dec 01 20:31:53 compute-0 sudo[80808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:31:53 compute-0 systemd[1]: Reloading.
Dec 01 20:31:53 compute-0 ceph-mon[75880]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:31:53 compute-0 ceph-mon[75880]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:31:53 compute-0 ceph-mon[75880]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 01 20:31:53 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:53 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:53 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:53 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 01 20:31:53 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 01 20:31:53 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:53 compute-0 ceph-mon[75880]: Deploying daemon crash.compute-0 on compute-0
Dec 01 20:31:53 compute-0 systemd-sysv-generator[80845]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:53 compute-0 systemd-rc-local-generator[80839]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:53 compute-0 python3[80813]: ansible-ansible.legacy.async_status Invoked with jid=j537136416627.80080 mode=cleanup _async_dir=/root/.ansible_async
Dec 01 20:31:53 compute-0 sudo[80808]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:53 compute-0 systemd[1]: Starting Ceph crash.compute-0 for dcf60a89-bba0-58b0-a1bf-d4bde723201b...
Dec 01 20:31:53 compute-0 sudo[80901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcuinkwjizxdqibtvvbvdwhgzgpuwizr ; /usr/bin/python3'
Dec 01 20:31:53 compute-0 sudo[80901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:31:53 compute-0 podman[80925]: 2025-12-01 20:31:53.937907964 +0000 UTC m=+0.046865272 container create 83fed8c2b0dce14654a477950263f4f695328f62e55a3005e83a2024b4c67359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b75585ad98da052dae8275a6d8297ced329a2afe32dd20a4dd2b3d64b63f903/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b75585ad98da052dae8275a6d8297ced329a2afe32dd20a4dd2b3d64b63f903/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b75585ad98da052dae8275a6d8297ced329a2afe32dd20a4dd2b3d64b63f903/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b75585ad98da052dae8275a6d8297ced329a2afe32dd20a4dd2b3d64b63f903/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:53 compute-0 podman[80925]: 2025-12-01 20:31:53.993727462 +0000 UTC m=+0.102684800 container init 83fed8c2b0dce14654a477950263f4f695328f62e55a3005e83a2024b4c67359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 01 20:31:53 compute-0 python3[80908]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 01 20:31:54 compute-0 podman[80925]: 2025-12-01 20:31:53.999992677 +0000 UTC m=+0.108949985 container start 83fed8c2b0dce14654a477950263f4f695328f62e55a3005e83a2024b4c67359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:54 compute-0 bash[80925]: 83fed8c2b0dce14654a477950263f4f695328f62e55a3005e83a2024b4c67359
Dec 01 20:31:54 compute-0 podman[80925]: 2025-12-01 20:31:53.916611399 +0000 UTC m=+0.025568737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:31:54 compute-0 systemd[1]: Started Ceph crash.compute-0 for dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:31:54 compute-0 sudo[80901]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:54 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0[80940]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 01 20:31:54 compute-0 sudo[80582]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:31:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 01 20:31:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:54 compute-0 ceph-mgr[76174]: [progress INFO root] complete: finished ev f236fe5a-12ee-4b85-8e96-ee0c01949e19 (Updating crash deployment (+1 -> 1))
Dec 01 20:31:54 compute-0 ceph-mgr[76174]: [progress INFO root] Completed event f236fe5a-12ee-4b85-8e96-ee0c01949e19 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec 01 20:31:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 01 20:31:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 01 20:31:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:54 compute-0 ceph-mgr[76174]: [progress INFO root] update: starting ev 02ed3e30-d5d5-4523-8989-b2154be9f074 (Updating mgr deployment (+1 -> 2))
Dec 01 20:31:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.gxqcne", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 01 20:31:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.gxqcne", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 01 20:31:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gxqcne", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 01 20:31:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 01 20:31:54 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mgr services"} : dispatch
Dec 01 20:31:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:31:54 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:54 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.gxqcne on compute-0
Dec 01 20:31:54 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.gxqcne on compute-0
Dec 01 20:31:54 compute-0 sudo[80949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:54 compute-0 sudo[80949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:54 compute-0 sudo[80949]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:54 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0[80940]: 2025-12-01T20:31:54.168+0000 7ff0f3dda640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 01 20:31:54 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0[80940]: 2025-12-01T20:31:54.168+0000 7ff0f3dda640 -1 AuthRegistry(0x7ff0ec052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 01 20:31:54 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0[80940]: 2025-12-01T20:31:54.169+0000 7ff0f3dda640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 01 20:31:54 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0[80940]: 2025-12-01T20:31:54.169+0000 7ff0f3dda640 -1 AuthRegistry(0x7ff0f3dd8fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 01 20:31:54 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0[80940]: 2025-12-01T20:31:54.170+0000 7ff0f1b4f640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 01 20:31:54 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0[80940]: 2025-12-01T20:31:54.170+0000 7ff0f3dda640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 01 20:31:54 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0[80940]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 01 20:31:54 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-crash-compute-0[80940]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 01 20:31:54 compute-0 sudo[80984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:54 compute-0 sudo[80984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:54 compute-0 sudo[81032]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqtgirkhqqxgwedvqyfpmxgtlhpajztt ; /usr/bin/python3'
Dec 01 20:31:54 compute-0 sudo[81032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:31:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:31:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:31:54 compute-0 python3[81034]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:31:54 compute-0 podman[81050]: 2025-12-01 20:31:54.518800316 +0000 UTC m=+0.047192551 container create af91f4ec8198e3ef619dcd6776707d6e4e517a2b93e1d65059423dfbb635d312 (image=quay.io/ceph/ceph:v20, name=boring_visvesvaraya, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:54 compute-0 systemd[1]: Started libpod-conmon-af91f4ec8198e3ef619dcd6776707d6e4e517a2b93e1d65059423dfbb635d312.scope.
Dec 01 20:31:54 compute-0 podman[81050]: 2025-12-01 20:31:54.49217618 +0000 UTC m=+0.020568435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:54 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2a357f750a154f06bd52b975423ec87ce777d0c2f236aa05b870b33ad9ecf6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2a357f750a154f06bd52b975423ec87ce777d0c2f236aa05b870b33ad9ecf6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2a357f750a154f06bd52b975423ec87ce777d0c2f236aa05b870b33ad9ecf6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:54 compute-0 podman[81050]: 2025-12-01 20:31:54.609958684 +0000 UTC m=+0.138350919 container init af91f4ec8198e3ef619dcd6776707d6e4e517a2b93e1d65059423dfbb635d312 (image=quay.io/ceph/ceph:v20, name=boring_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 01 20:31:54 compute-0 podman[81050]: 2025-12-01 20:31:54.61785123 +0000 UTC m=+0.146243465 container start af91f4ec8198e3ef619dcd6776707d6e4e517a2b93e1d65059423dfbb635d312 (image=quay.io/ceph/ceph:v20, name=boring_visvesvaraya, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Dec 01 20:31:54 compute-0 podman[81050]: 2025-12-01 20:31:54.623219464 +0000 UTC m=+0.151611749 container attach af91f4ec8198e3ef619dcd6776707d6e4e517a2b93e1d65059423dfbb635d312 (image=quay.io/ceph/ceph:v20, name=boring_visvesvaraya, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:54 compute-0 podman[81093]: 2025-12-01 20:31:54.633913998 +0000 UTC m=+0.052050455 container create 7f770d306281f1c00744072a2d33dc29f71b69a94eb41e1ab4a662b2f6627879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wilson, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:54 compute-0 systemd[1]: Started libpod-conmon-7f770d306281f1c00744072a2d33dc29f71b69a94eb41e1ab4a662b2f6627879.scope.
Dec 01 20:31:54 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:54 compute-0 podman[81093]: 2025-12-01 20:31:54.688621124 +0000 UTC m=+0.106757581 container init 7f770d306281f1c00744072a2d33dc29f71b69a94eb41e1ab4a662b2f6627879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Dec 01 20:31:54 compute-0 podman[81093]: 2025-12-01 20:31:54.694280644 +0000 UTC m=+0.112417101 container start 7f770d306281f1c00744072a2d33dc29f71b69a94eb41e1ab4a662b2f6627879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 01 20:31:54 compute-0 lucid_wilson[81111]: 167 167
Dec 01 20:31:54 compute-0 systemd[1]: libpod-7f770d306281f1c00744072a2d33dc29f71b69a94eb41e1ab4a662b2f6627879.scope: Deactivated successfully.
Dec 01 20:31:54 compute-0 podman[81093]: 2025-12-01 20:31:54.698320052 +0000 UTC m=+0.116456529 container attach 7f770d306281f1c00744072a2d33dc29f71b69a94eb41e1ab4a662b2f6627879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 20:31:54 compute-0 podman[81093]: 2025-12-01 20:31:54.698749396 +0000 UTC m=+0.116885853 container died 7f770d306281f1c00744072a2d33dc29f71b69a94eb41e1ab4a662b2f6627879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wilson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:31:54 compute-0 podman[81093]: 2025-12-01 20:31:54.616823501 +0000 UTC m=+0.034959988 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a15e4fb21f84b770312b4f3ccd86d1ee4b8cf66ac32276d2154d9a97f5d3fcae-merged.mount: Deactivated successfully.
Dec 01 20:31:54 compute-0 podman[81093]: 2025-12-01 20:31:54.73787179 +0000 UTC m=+0.156008257 container remove 7f770d306281f1c00744072a2d33dc29f71b69a94eb41e1ab4a662b2f6627879 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:31:54 compute-0 systemd[1]: libpod-conmon-7f770d306281f1c00744072a2d33dc29f71b69a94eb41e1ab4a662b2f6627879.scope: Deactivated successfully.
Dec 01 20:31:54 compute-0 systemd[1]: Reloading.
Dec 01 20:31:54 compute-0 systemd-sysv-generator[81176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:54 compute-0 systemd-rc-local-generator[81173]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:55 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:31:55 compute-0 boring_visvesvaraya[81091]: 
Dec 01 20:31:55 compute-0 boring_visvesvaraya[81091]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 01 20:31:55 compute-0 systemd[1]: libpod-af91f4ec8198e3ef619dcd6776707d6e4e517a2b93e1d65059423dfbb635d312.scope: Deactivated successfully.
Dec 01 20:31:55 compute-0 podman[81050]: 2025-12-01 20:31:55.040295198 +0000 UTC m=+0.568687433 container died af91f4ec8198e3ef619dcd6776707d6e4e517a2b93e1d65059423dfbb635d312 (image=quay.io/ceph/ceph:v20, name=boring_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.gxqcne", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 01 20:31:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gxqcne", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 01 20:31:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mgr services"} : dispatch
Dec 01 20:31:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:55 compute-0 ceph-mon[75880]: Deploying daemon mgr.compute-0.gxqcne on compute-0
Dec 01 20:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c2a357f750a154f06bd52b975423ec87ce777d0c2f236aa05b870b33ad9ecf6-merged.mount: Deactivated successfully.
Dec 01 20:31:55 compute-0 podman[81050]: 2025-12-01 20:31:55.081008701 +0000 UTC m=+0.609400936 container remove af91f4ec8198e3ef619dcd6776707d6e4e517a2b93e1d65059423dfbb635d312 (image=quay.io/ceph/ceph:v20, name=boring_visvesvaraya, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:55 compute-0 systemd[1]: Reloading.
Dec 01 20:31:55 compute-0 sudo[81032]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:55 compute-0 systemd-rc-local-generator[81224]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:31:55 compute-0 systemd-sysv-generator[81227]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:31:55 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:55 compute-0 systemd[1]: libpod-conmon-af91f4ec8198e3ef619dcd6776707d6e4e517a2b93e1d65059423dfbb635d312.scope: Deactivated successfully.
Dec 01 20:31:55 compute-0 systemd[1]: Starting Ceph mgr.compute-0.gxqcne for dcf60a89-bba0-58b0-a1bf-d4bde723201b...
Dec 01 20:31:55 compute-0 sudo[81260]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxsyrzsxmuzmxfzvheqzqeyttlgoenlb ; /usr/bin/python3'
Dec 01 20:31:55 compute-0 sudo[81260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:31:55 compute-0 python3[81269]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:31:55 compute-0 podman[81308]: 2025-12-01 20:31:55.577055014 +0000 UTC m=+0.040675412 container create 1b1ad249873f1801f24124784ee3aa0a0c607088da90a17e3f0f0eed816cfc5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-gxqcne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 01 20:31:55 compute-0 podman[81315]: 2025-12-01 20:31:55.605214797 +0000 UTC m=+0.044838698 container create a0ae9e17e28a1b874211b89897c77c6206d3c55b1b24211c02e6ddab3b2469fb (image=quay.io/ceph/ceph:v20, name=laughing_banzai, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ee30828144dc0e86be630a81a83f40c39202e8c19cedb227fc7355a137612c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ee30828144dc0e86be630a81a83f40c39202e8c19cedb227fc7355a137612c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ee30828144dc0e86be630a81a83f40c39202e8c19cedb227fc7355a137612c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ee30828144dc0e86be630a81a83f40c39202e8c19cedb227fc7355a137612c/merged/var/lib/ceph/mgr/ceph-compute-0.gxqcne supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:55 compute-0 podman[81308]: 2025-12-01 20:31:55.627949093 +0000 UTC m=+0.091569521 container init 1b1ad249873f1801f24124784ee3aa0a0c607088da90a17e3f0f0eed816cfc5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-gxqcne, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:55 compute-0 podman[81308]: 2025-12-01 20:31:55.635333981 +0000 UTC m=+0.098954379 container start 1b1ad249873f1801f24124784ee3aa0a0c607088da90a17e3f0f0eed816cfc5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-gxqcne, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:31:55 compute-0 bash[81308]: 1b1ad249873f1801f24124784ee3aa0a0c607088da90a17e3f0f0eed816cfc5b
Dec 01 20:31:55 compute-0 podman[81308]: 2025-12-01 20:31:55.559585016 +0000 UTC m=+0.023205434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:31:55 compute-0 systemd[1]: Started libpod-conmon-a0ae9e17e28a1b874211b89897c77c6206d3c55b1b24211c02e6ddab3b2469fb.scope.
Dec 01 20:31:55 compute-0 systemd[1]: Started Ceph mgr.compute-0.gxqcne for dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:31:55 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22edbd707c2e084b17249fc716d6784a7e0d78835118c254615d9697cd6e8d97/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22edbd707c2e084b17249fc716d6784a7e0d78835118c254615d9697cd6e8d97/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22edbd707c2e084b17249fc716d6784a7e0d78835118c254615d9697cd6e8d97/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:55 compute-0 podman[81315]: 2025-12-01 20:31:55.679563203 +0000 UTC m=+0.119187124 container init a0ae9e17e28a1b874211b89897c77c6206d3c55b1b24211c02e6ddab3b2469fb (image=quay.io/ceph/ceph:v20, name=laughing_banzai, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:55 compute-0 podman[81315]: 2025-12-01 20:31:55.587528607 +0000 UTC m=+0.027152538 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:55 compute-0 ceph-mgr[81342]: set uid:gid to 167:167 (ceph:ceph)
Dec 01 20:31:55 compute-0 ceph-mgr[81342]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 01 20:31:55 compute-0 ceph-mgr[81342]: pidfile_write: ignore empty --pid-file
Dec 01 20:31:55 compute-0 sudo[80984]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:55 compute-0 podman[81315]: 2025-12-01 20:31:55.687577453 +0000 UTC m=+0.127201354 container start a0ae9e17e28a1b874211b89897c77c6206d3c55b1b24211c02e6ddab3b2469fb (image=quay.io/ceph/ceph:v20, name=laughing_banzai, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:55 compute-0 podman[81315]: 2025-12-01 20:31:55.690615667 +0000 UTC m=+0.130239588 container attach a0ae9e17e28a1b874211b89897c77c6206d3c55b1b24211c02e6ddab3b2469fb (image=quay.io/ceph/ceph:v20, name=laughing_banzai, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 01 20:31:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:31:55 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:55 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 01 20:31:55 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'alerts'
Dec 01 20:31:55 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:55 compute-0 ceph-mgr[76174]: [progress INFO root] complete: finished ev 02ed3e30-d5d5-4523-8989-b2154be9f074 (Updating mgr deployment (+1 -> 2))
Dec 01 20:31:55 compute-0 ceph-mgr[76174]: [progress INFO root] Completed event 02ed3e30-d5d5-4523-8989-b2154be9f074 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Dec 01 20:31:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 01 20:31:55 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:55 compute-0 sudo[81367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:31:55 compute-0 sudo[81367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:55 compute-0 sudo[81367]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:55 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'balancer'
Dec 01 20:31:55 compute-0 sudo[81394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:55 compute-0 sudo[81394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:55 compute-0 sudo[81394]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:55 compute-0 sudo[81436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 01 20:31:55 compute-0 sudo[81436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:55 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'cephadm'
Dec 01 20:31:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 01 20:31:56 compute-0 ceph-mon[75880]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:31:56 compute-0 ceph-mon[75880]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:31:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2495357742' entity='client.admin' 
Dec 01 20:31:56 compute-0 systemd[1]: libpod-a0ae9e17e28a1b874211b89897c77c6206d3c55b1b24211c02e6ddab3b2469fb.scope: Deactivated successfully.
Dec 01 20:31:56 compute-0 podman[81490]: 2025-12-01 20:31:56.204102506 +0000 UTC m=+0.022004936 container died a0ae9e17e28a1b874211b89897c77c6206d3c55b1b24211c02e6ddab3b2469fb (image=quay.io/ceph/ceph:v20, name=laughing_banzai, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 01 20:31:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-22edbd707c2e084b17249fc716d6784a7e0d78835118c254615d9697cd6e8d97-merged.mount: Deactivated successfully.
Dec 01 20:31:56 compute-0 podman[81490]: 2025-12-01 20:31:56.24625417 +0000 UTC m=+0.064156580 container remove a0ae9e17e28a1b874211b89897c77c6206d3c55b1b24211c02e6ddab3b2469fb (image=quay.io/ceph/ceph:v20, name=laughing_banzai, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:56 compute-0 systemd[1]: libpod-conmon-a0ae9e17e28a1b874211b89897c77c6206d3c55b1b24211c02e6ddab3b2469fb.scope: Deactivated successfully.
Dec 01 20:31:56 compute-0 sudo[81260]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:56 compute-0 podman[81520]: 2025-12-01 20:31:56.345471242 +0000 UTC m=+0.060009902 container exec 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 01 20:31:56 compute-0 sudo[81574]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecfpadzqfpasubowwqnugzqlfkixzijt ; /usr/bin/python3'
Dec 01 20:31:56 compute-0 sudo[81574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:31:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:31:56 compute-0 podman[81520]: 2025-12-01 20:31:56.460533377 +0000 UTC m=+0.175072047 container exec_died 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:56 compute-0 python3[81577]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:31:56 compute-0 podman[81614]: 2025-12-01 20:31:56.623969679 +0000 UTC m=+0.046307874 container create 82726c746f8a3c7f1ef56d574d1b341cc7db04c6a57a2589d580440e1f774c17 (image=quay.io/ceph/ceph:v20, name=awesome_sinoussi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:31:56 compute-0 systemd[1]: Started libpod-conmon-82726c746f8a3c7f1ef56d574d1b341cc7db04c6a57a2589d580440e1f774c17.scope.
Dec 01 20:31:56 compute-0 ansible-async_wrapper.py[80156]: Done in kid B.
Dec 01 20:31:56 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64dce4b7d0ce01376f5aec4ded4958a1cc63a7e042a0be4c5a6ef06315d8423/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64dce4b7d0ce01376f5aec4ded4958a1cc63a7e042a0be4c5a6ef06315d8423/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64dce4b7d0ce01376f5aec4ded4958a1cc63a7e042a0be4c5a6ef06315d8423/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:56 compute-0 podman[81614]: 2025-12-01 20:31:56.601136768 +0000 UTC m=+0.023474983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:56 compute-0 podman[81614]: 2025-12-01 20:31:56.701050861 +0000 UTC m=+0.123389076 container init 82726c746f8a3c7f1ef56d574d1b341cc7db04c6a57a2589d580440e1f774c17 (image=quay.io/ceph/ceph:v20, name=awesome_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:56 compute-0 podman[81614]: 2025-12-01 20:31:56.706831251 +0000 UTC m=+0.129169436 container start 82726c746f8a3c7f1ef56d574d1b341cc7db04c6a57a2589d580440e1f774c17 (image=quay.io/ceph/ceph:v20, name=awesome_sinoussi, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:56 compute-0 podman[81614]: 2025-12-01 20:31:56.72027657 +0000 UTC m=+0.142614795 container attach 82726c746f8a3c7f1ef56d574d1b341cc7db04c6a57a2589d580440e1f774c17 (image=quay.io/ceph/ceph:v20, name=awesome_sinoussi, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:56 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'crash'
Dec 01 20:31:56 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'dashboard'
Dec 01 20:31:56 compute-0 sudo[81436]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:31:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:31:56 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:31:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:31:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:31:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:56 compute-0 sudo[81705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:31:56 compute-0 sudo[81705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:56 compute-0 sudo[81705]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:56 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 01 20:31:56 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 01 20:31:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 01 20:31:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 01 20:31:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 01 20:31:56 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 01 20:31:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:31:56 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:56 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 01 20:31:56 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 01 20:31:56 compute-0 sudo[81730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:57 compute-0 sudo[81730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:57 compute-0 sudo[81730]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:57 compute-0 sudo[81755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:57 compute-0 sudo[81755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 01 20:31:57 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1040757941' entity='client.admin' 
Dec 01 20:31:57 compute-0 systemd[1]: libpod-82726c746f8a3c7f1ef56d574d1b341cc7db04c6a57a2589d580440e1f774c17.scope: Deactivated successfully.
Dec 01 20:31:57 compute-0 podman[81614]: 2025-12-01 20:31:57.143375523 +0000 UTC m=+0.565713718 container died 82726c746f8a3c7f1ef56d574d1b341cc7db04c6a57a2589d580440e1f774c17 (image=quay.io/ceph/ceph:v20, name=awesome_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 01 20:31:57 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2495357742' entity='client.admin' 
Dec 01 20:31:57 compute-0 ceph-mon[75880]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:31:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:31:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:57 compute-0 ceph-mon[75880]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 01 20:31:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 01 20:31:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 01 20:31:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:57 compute-0 ceph-mon[75880]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 01 20:31:57 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1040757941' entity='client.admin' 
Dec 01 20:31:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a64dce4b7d0ce01376f5aec4ded4958a1cc63a7e042a0be4c5a6ef06315d8423-merged.mount: Deactivated successfully.
Dec 01 20:31:57 compute-0 podman[81614]: 2025-12-01 20:31:57.192355719 +0000 UTC m=+0.614693914 container remove 82726c746f8a3c7f1ef56d574d1b341cc7db04c6a57a2589d580440e1f774c17 (image=quay.io/ceph/ceph:v20, name=awesome_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:31:57 compute-0 systemd[1]: libpod-conmon-82726c746f8a3c7f1ef56d574d1b341cc7db04c6a57a2589d580440e1f774c17.scope: Deactivated successfully.
Dec 01 20:31:57 compute-0 sudo[81574]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:57 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:57 compute-0 podman[81808]: 2025-12-01 20:31:57.335931203 +0000 UTC m=+0.045920572 container create 9d61c22cba4a0c5bbf9880b4a1110084594282adb750c809809bce75a86e7130 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:31:57 compute-0 systemd[1]: Started libpod-conmon-9d61c22cba4a0c5bbf9880b4a1110084594282adb750c809809bce75a86e7130.scope.
Dec 01 20:31:57 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:57 compute-0 podman[81808]: 2025-12-01 20:31:57.389495122 +0000 UTC m=+0.099484521 container init 9d61c22cba4a0c5bbf9880b4a1110084594282adb750c809809bce75a86e7130 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:57 compute-0 podman[81808]: 2025-12-01 20:31:57.39393136 +0000 UTC m=+0.103920749 container start 9d61c22cba4a0c5bbf9880b4a1110084594282adb750c809809bce75a86e7130 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 20:31:57 compute-0 relaxed_robinson[81829]: 167 167
Dec 01 20:31:57 compute-0 systemd[1]: libpod-9d61c22cba4a0c5bbf9880b4a1110084594282adb750c809809bce75a86e7130.scope: Deactivated successfully.
Dec 01 20:31:57 compute-0 podman[81808]: 2025-12-01 20:31:57.397833252 +0000 UTC m=+0.107822631 container attach 9d61c22cba4a0c5bbf9880b4a1110084594282adb750c809809bce75a86e7130 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 01 20:31:57 compute-0 conmon[81829]: conmon 9d61c22cba4a0c5bbf98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d61c22cba4a0c5bbf9880b4a1110084594282adb750c809809bce75a86e7130.scope/container/memory.events
Dec 01 20:31:57 compute-0 podman[81808]: 2025-12-01 20:31:57.398840193 +0000 UTC m=+0.108829562 container died 9d61c22cba4a0c5bbf9880b4a1110084594282adb750c809809bce75a86e7130 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 01 20:31:57 compute-0 sudo[81851]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbmrincndrnzqohoclznzwynytcairyq ; /usr/bin/python3'
Dec 01 20:31:57 compute-0 podman[81808]: 2025-12-01 20:31:57.313674589 +0000 UTC m=+0.023664038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:57 compute-0 sudo[81851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:31:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0071b73288e2741621ffaacaf1f5f60817227c8b195e08c7f5f808ea47fe489-merged.mount: Deactivated successfully.
Dec 01 20:31:57 compute-0 podman[81808]: 2025-12-01 20:31:57.43532723 +0000 UTC m=+0.145316599 container remove 9d61c22cba4a0c5bbf9880b4a1110084594282adb750c809809bce75a86e7130 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 01 20:31:57 compute-0 systemd[1]: libpod-conmon-9d61c22cba4a0c5bbf9880b4a1110084594282adb750c809809bce75a86e7130.scope: Deactivated successfully.
Dec 01 20:31:57 compute-0 sudo[81755]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:31:57 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:57 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:57 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.xhvuzu (unknown last config time)...
Dec 01 20:31:57 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.xhvuzu (unknown last config time)...
Dec 01 20:31:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.xhvuzu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 01 20:31:57 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.xhvuzu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 01 20:31:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 01 20:31:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mgr services"} : dispatch
Dec 01 20:31:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:31:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:57 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.xhvuzu on compute-0
Dec 01 20:31:57 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.xhvuzu on compute-0
Dec 01 20:31:57 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'devicehealth'
Dec 01 20:31:57 compute-0 python3[81855]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:31:57 compute-0 sudo[81868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:57 compute-0 sudo[81868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:57 compute-0 sudo[81868]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:57 compute-0 podman[81891]: 2025-12-01 20:31:57.603606163 +0000 UTC m=+0.039123420 container create ec14ea2a34ce0fa399747d6e66b2ba2545b10589f3594ed5e71a50b80d27ad53 (image=quay.io/ceph/ceph:v20, name=fervent_brattain, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 01 20:31:57 compute-0 sudo[81899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:31:57 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'diskprediction_local'
Dec 01 20:31:57 compute-0 sudo[81899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:57 compute-0 systemd[1]: Started libpod-conmon-ec14ea2a34ce0fa399747d6e66b2ba2545b10589f3594ed5e71a50b80d27ad53.scope.
Dec 01 20:31:57 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38adcf068818758ff7277cd7035da083656d10be070b7387f0e6b9c775fef2e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38adcf068818758ff7277cd7035da083656d10be070b7387f0e6b9c775fef2e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38adcf068818758ff7277cd7035da083656d10be070b7387f0e6b9c775fef2e1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:57 compute-0 podman[81891]: 2025-12-01 20:31:57.681920903 +0000 UTC m=+0.117438230 container init ec14ea2a34ce0fa399747d6e66b2ba2545b10589f3594ed5e71a50b80d27ad53 (image=quay.io/ceph/ceph:v20, name=fervent_brattain, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:57 compute-0 podman[81891]: 2025-12-01 20:31:57.587755729 +0000 UTC m=+0.023272996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:57 compute-0 podman[81891]: 2025-12-01 20:31:57.693198794 +0000 UTC m=+0.128716051 container start ec14ea2a34ce0fa399747d6e66b2ba2545b10589f3594ed5e71a50b80d27ad53 (image=quay.io/ceph/ceph:v20, name=fervent_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Dec 01 20:31:57 compute-0 podman[81891]: 2025-12-01 20:31:57.696817707 +0000 UTC m=+0.132335014 container attach ec14ea2a34ce0fa399747d6e66b2ba2545b10589f3594ed5e71a50b80d27ad53 (image=quay.io/ceph/ceph:v20, name=fervent_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:57 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-gxqcne[81336]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 01 20:31:57 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-gxqcne[81336]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 01 20:31:57 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-gxqcne[81336]:   from numpy import show_config as show_numpy_config
Dec 01 20:31:57 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'influx'
Dec 01 20:31:57 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'insights'
Dec 01 20:31:57 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'iostat'
Dec 01 20:31:57 compute-0 podman[81972]: 2025-12-01 20:31:57.967663146 +0000 UTC m=+0.033932617 container create 4d1b3bc60a422c7926d9b232801b6d22f5bdb3d23b7b0e5aa9e42077a973fcb5 (image=quay.io/ceph/ceph:v20, name=suspicious_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:57 compute-0 systemd[1]: Started libpod-conmon-4d1b3bc60a422c7926d9b232801b6d22f5bdb3d23b7b0e5aa9e42077a973fcb5.scope.
Dec 01 20:31:58 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'k8sevents'
Dec 01 20:31:58 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:58 compute-0 podman[81972]: 2025-12-01 20:31:58.033112006 +0000 UTC m=+0.099381517 container init 4d1b3bc60a422c7926d9b232801b6d22f5bdb3d23b7b0e5aa9e42077a973fcb5 (image=quay.io/ceph/ceph:v20, name=suspicious_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 01 20:31:58 compute-0 podman[81972]: 2025-12-01 20:31:58.038116862 +0000 UTC m=+0.104386333 container start 4d1b3bc60a422c7926d9b232801b6d22f5bdb3d23b7b0e5aa9e42077a973fcb5 (image=quay.io/ceph/ceph:v20, name=suspicious_chebyshev, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:58 compute-0 suspicious_chebyshev[81988]: 167 167
Dec 01 20:31:58 compute-0 podman[81972]: 2025-12-01 20:31:58.041647892 +0000 UTC m=+0.107917413 container attach 4d1b3bc60a422c7926d9b232801b6d22f5bdb3d23b7b0e5aa9e42077a973fcb5 (image=quay.io/ceph/ceph:v20, name=suspicious_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:31:58 compute-0 systemd[1]: libpod-4d1b3bc60a422c7926d9b232801b6d22f5bdb3d23b7b0e5aa9e42077a973fcb5.scope: Deactivated successfully.
Dec 01 20:31:58 compute-0 conmon[81988]: conmon 4d1b3bc60a422c7926d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d1b3bc60a422c7926d9b232801b6d22f5bdb3d23b7b0e5aa9e42077a973fcb5.scope/container/memory.events
Dec 01 20:31:58 compute-0 podman[81972]: 2025-12-01 20:31:58.044213122 +0000 UTC m=+0.110482593 container died 4d1b3bc60a422c7926d9b232801b6d22f5bdb3d23b7b0e5aa9e42077a973fcb5 (image=quay.io/ceph/ceph:v20, name=suspicious_chebyshev, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:58 compute-0 podman[81972]: 2025-12-01 20:31:57.952942648 +0000 UTC m=+0.019212139 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e206fad3baa640860e17466a204752acac3d4c502b84d6bcb925dd0315924c3-merged.mount: Deactivated successfully.
Dec 01 20:31:58 compute-0 podman[81972]: 2025-12-01 20:31:58.079547202 +0000 UTC m=+0.145816684 container remove 4d1b3bc60a422c7926d9b232801b6d22f5bdb3d23b7b0e5aa9e42077a973fcb5 (image=quay.io/ceph/ceph:v20, name=suspicious_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 01 20:31:58 compute-0 systemd[1]: libpod-conmon-4d1b3bc60a422c7926d9b232801b6d22f5bdb3d23b7b0e5aa9e42077a973fcb5.scope: Deactivated successfully.
Dec 01 20:31:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 01 20:31:58 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1637924612' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 01 20:31:58 compute-0 sudo[81899]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:31:58 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:58 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:58 compute-0 sudo[82005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:58 compute-0 sudo[82005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:58 compute-0 sudo[82005]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:58 compute-0 ceph-mgr[76174]: [progress INFO root] Writing back 2 completed events
Dec 01 20:31:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 01 20:31:58 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:58 compute-0 sudo[82030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 01 20:31:58 compute-0 sudo[82030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:58 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'localpool'
Dec 01 20:31:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:31:58 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'mds_autoscaler'
Dec 01 20:31:58 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:58 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:58 compute-0 ceph-mon[75880]: Reconfiguring mgr.compute-0.xhvuzu (unknown last config time)...
Dec 01 20:31:58 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.xhvuzu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 01 20:31:58 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mgr services"} : dispatch
Dec 01 20:31:58 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:58 compute-0 ceph-mon[75880]: Reconfiguring daemon mgr.compute-0.xhvuzu on compute-0
Dec 01 20:31:58 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1637924612' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 01 20:31:58 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:58 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:58 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 01 20:31:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:31:58 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1637924612' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 01 20:31:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 01 20:31:58 compute-0 fervent_brattain[81933]: set require_min_compat_client to mimic
Dec 01 20:31:58 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 01 20:31:58 compute-0 systemd[1]: libpod-ec14ea2a34ce0fa399747d6e66b2ba2545b10589f3594ed5e71a50b80d27ad53.scope: Deactivated successfully.
Dec 01 20:31:58 compute-0 podman[81891]: 2025-12-01 20:31:58.53550555 +0000 UTC m=+0.971022797 container died ec14ea2a34ce0fa399747d6e66b2ba2545b10589f3594ed5e71a50b80d27ad53 (image=quay.io/ceph/ceph:v20, name=fervent_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-38adcf068818758ff7277cd7035da083656d10be070b7387f0e6b9c775fef2e1-merged.mount: Deactivated successfully.
Dec 01 20:31:58 compute-0 podman[81891]: 2025-12-01 20:31:58.58781267 +0000 UTC m=+1.023329957 container remove ec14ea2a34ce0fa399747d6e66b2ba2545b10589f3594ed5e71a50b80d27ad53 (image=quay.io/ceph/ceph:v20, name=fervent_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:31:58 compute-0 systemd[1]: libpod-conmon-ec14ea2a34ce0fa399747d6e66b2ba2545b10589f3594ed5e71a50b80d27ad53.scope: Deactivated successfully.
Dec 01 20:31:58 compute-0 sudo[81851]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:58 compute-0 podman[82113]: 2025-12-01 20:31:58.691667336 +0000 UTC m=+0.050730412 container exec 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:31:58 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'mirroring'
Dec 01 20:31:58 compute-0 podman[82113]: 2025-12-01 20:31:58.776631043 +0000 UTC m=+0.135694159 container exec_died 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:31:58 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'nfs'
Dec 01 20:31:59 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'orchestrator'
Dec 01 20:31:59 compute-0 sudo[82232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slxiwqcngavsxyvuetjwynpuvrjrmzdk ; /usr/bin/python3'
Dec 01 20:31:59 compute-0 sudo[82232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:31:59 compute-0 sudo[82030]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:59 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:31:59 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:59 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:31:59 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:59 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:31:59 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:59 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:31:59 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:31:59 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:31:59 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:59 compute-0 python3[82236]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:31:59 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:31:59 compute-0 sudo[82253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:31:59 compute-0 sudo[82253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:59 compute-0 sudo[82253]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:59 compute-0 podman[82254]: 2025-12-01 20:31:59.268979854 +0000 UTC m=+0.038812111 container create e909c29d5857ca8197c57627c6fb6b5fe00f0a343d4dd5e1ad7bbe8e80a8d374 (image=quay.io/ceph/ceph:v20, name=vigorous_lewin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:31:59 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'osd_perf_query'
Dec 01 20:31:59 compute-0 systemd[1]: Started libpod-conmon-e909c29d5857ca8197c57627c6fb6b5fe00f0a343d4dd5e1ad7bbe8e80a8d374.scope.
Dec 01 20:31:59 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab09c4511e7bcfdd83dd4c2eb167f7ee6bd13574c8e3c5c0e407f4ef2913bffc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab09c4511e7bcfdd83dd4c2eb167f7ee6bd13574c8e3c5c0e407f4ef2913bffc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab09c4511e7bcfdd83dd4c2eb167f7ee6bd13574c8e3c5c0e407f4ef2913bffc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 01 20:31:59 compute-0 podman[82254]: 2025-12-01 20:31:59.328579281 +0000 UTC m=+0.098411528 container init e909c29d5857ca8197c57627c6fb6b5fe00f0a343d4dd5e1ad7bbe8e80a8d374 (image=quay.io/ceph/ceph:v20, name=vigorous_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:31:59 compute-0 podman[82254]: 2025-12-01 20:31:59.335426734 +0000 UTC m=+0.105258981 container start e909c29d5857ca8197c57627c6fb6b5fe00f0a343d4dd5e1ad7bbe8e80a8d374 (image=quay.io/ceph/ceph:v20, name=vigorous_lewin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec 01 20:31:59 compute-0 podman[82254]: 2025-12-01 20:31:59.33850512 +0000 UTC m=+0.108337387 container attach e909c29d5857ca8197c57627c6fb6b5fe00f0a343d4dd5e1ad7bbe8e80a8d374 (image=quay.io/ceph/ceph:v20, name=vigorous_lewin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:31:59 compute-0 podman[82254]: 2025-12-01 20:31:59.250831048 +0000 UTC m=+0.020663325 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:31:59 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:31:59 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'osd_support'
Dec 01 20:31:59 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'pg_autoscaler'
Dec 01 20:31:59 compute-0 ceph-mon[75880]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:31:59 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1637924612' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 01 20:31:59 compute-0 ceph-mon[75880]: osdmap e3: 0 total, 0 up, 0 in
Dec 01 20:31:59 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:59 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:59 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:31:59 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:31:59 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:31:59 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'progress'
Dec 01 20:31:59 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'prometheus'
Dec 01 20:31:59 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:31:59 compute-0 sudo[82318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:31:59 compute-0 sudo[82318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:59 compute-0 sudo[82318]: pam_unix(sudo:session): session closed for user root
Dec 01 20:31:59 compute-0 sudo[82343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 01 20:31:59 compute-0 sudo[82343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:31:59 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'rbd_support'
Dec 01 20:32:00 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'rgw'
Dec 01 20:32:00 compute-0 sudo[82343]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 01 20:32:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 01 20:32:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 01 20:32:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 01 20:32:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: [cephadm INFO root] Added host compute-0
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec 01 20:32:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 01 20:32:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:32:00 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:32:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:32:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:32:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec 01 20:32:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 01 20:32:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 01 20:32:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec 01 20:32:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 01 20:32:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: [progress INFO root] update: starting ev ef3fc12f-234a-4765-899d-3ff6fd0d6720 (Updating mgr deployment (-1 -> 1))
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.gxqcne from compute-0 -- ports [8765]
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.gxqcne from compute-0 -- ports [8765]
Dec 01 20:32:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 vigorous_lewin[82294]: Added host 'compute-0' with addr '192.168.122.100'
Dec 01 20:32:00 compute-0 vigorous_lewin[82294]: Scheduled mon update...
Dec 01 20:32:00 compute-0 vigorous_lewin[82294]: Scheduled mgr update...
Dec 01 20:32:00 compute-0 vigorous_lewin[82294]: Scheduled osd.default_drive_group update...
Dec 01 20:32:00 compute-0 systemd[1]: libpod-e909c29d5857ca8197c57627c6fb6b5fe00f0a343d4dd5e1ad7bbe8e80a8d374.scope: Deactivated successfully.
Dec 01 20:32:00 compute-0 podman[82254]: 2025-12-01 20:32:00.252899541 +0000 UTC m=+1.022731788 container died e909c29d5857ca8197c57627c6fb6b5fe00f0a343d4dd5e1ad7bbe8e80a8d374 (image=quay.io/ceph/ceph:v20, name=vigorous_lewin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 01 20:32:00 compute-0 sudo[82388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:00 compute-0 sudo[82388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:00 compute-0 sudo[82388]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab09c4511e7bcfdd83dd4c2eb167f7ee6bd13574c8e3c5c0e407f4ef2913bffc-merged.mount: Deactivated successfully.
Dec 01 20:32:00 compute-0 podman[82254]: 2025-12-01 20:32:00.308166433 +0000 UTC m=+1.077998680 container remove e909c29d5857ca8197c57627c6fb6b5fe00f0a343d4dd5e1ad7bbe8e80a8d374 (image=quay.io/ceph/ceph:v20, name=vigorous_lewin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:00 compute-0 systemd[1]: libpod-conmon-e909c29d5857ca8197c57627c6fb6b5fe00f0a343d4dd5e1ad7bbe8e80a8d374.scope: Deactivated successfully.
Dec 01 20:32:00 compute-0 sudo[82232]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:00 compute-0 sudo[82427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --name mgr.compute-0.gxqcne --force --tcp-ports 8765
Dec 01 20:32:00 compute-0 sudo[82427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:00 compute-0 ceph-mgr[81342]: mgr[py] Loading python module 'rook'
Dec 01 20:32:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:00 compute-0 sudo[82477]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcpinkjwtxyvzovanttuporqqbisnjll ; /usr/bin/python3'
Dec 01 20:32:00 compute-0 sudo[82477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:00 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.gxqcne for dcf60a89-bba0-58b0-a1bf-d4bde723201b...
Dec 01 20:32:00 compute-0 python3[82484]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:00 compute-0 podman[82505]: 2025-12-01 20:32:00.767630679 +0000 UTC m=+0.044276010 container create 21e7493123bc8cc76008e9e3c3ce1d3740a20f8d20cedba41c6660a00083a1ea (image=quay.io/ceph/ceph:v20, name=blissful_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 01 20:32:00 compute-0 systemd[1]: Started libpod-conmon-21e7493123bc8cc76008e9e3c3ce1d3740a20f8d20cedba41c6660a00083a1ea.scope.
Dec 01 20:32:00 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f5850be49cb5e1ccf58ca97d3de4e80243d15a171751219bba1d8f9a7e7db7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f5850be49cb5e1ccf58ca97d3de4e80243d15a171751219bba1d8f9a7e7db7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f5850be49cb5e1ccf58ca97d3de4e80243d15a171751219bba1d8f9a7e7db7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:00 compute-0 podman[82505]: 2025-12-01 20:32:00.751622371 +0000 UTC m=+0.028267702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:00 compute-0 podman[82505]: 2025-12-01 20:32:00.855544139 +0000 UTC m=+0.132189480 container init 21e7493123bc8cc76008e9e3c3ce1d3740a20f8d20cedba41c6660a00083a1ea (image=quay.io/ceph/ceph:v20, name=blissful_pasteur, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:00 compute-0 podman[82505]: 2025-12-01 20:32:00.861870386 +0000 UTC m=+0.138515707 container start 21e7493123bc8cc76008e9e3c3ce1d3740a20f8d20cedba41c6660a00083a1ea (image=quay.io/ceph/ceph:v20, name=blissful_pasteur, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:00 compute-0 podman[82505]: 2025-12-01 20:32:00.865053425 +0000 UTC m=+0.141698746 container attach 21e7493123bc8cc76008e9e3c3ce1d3740a20f8d20cedba41c6660a00083a1ea (image=quay.io/ceph/ceph:v20, name=blissful_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:00 compute-0 podman[82535]: 2025-12-01 20:32:00.908960834 +0000 UTC m=+0.106697426 container died 1b1ad249873f1801f24124784ee3aa0a0c607088da90a17e3f0f0eed816cfc5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-gxqcne, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-03ee30828144dc0e86be630a81a83f40c39202e8c19cedb227fc7355a137612c-merged.mount: Deactivated successfully.
Dec 01 20:32:00 compute-0 podman[82535]: 2025-12-01 20:32:00.960632723 +0000 UTC m=+0.158369315 container remove 1b1ad249873f1801f24124784ee3aa0a0c607088da90a17e3f0f0eed816cfc5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-gxqcne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 01 20:32:00 compute-0 bash[82535]: ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-gxqcne
Dec 01 20:32:00 compute-0 systemd[1]: ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b@mgr.compute-0.gxqcne.service: Main process exited, code=exited, status=143/n/a
Dec 01 20:32:01 compute-0 systemd[1]: ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b@mgr.compute-0.gxqcne.service: Failed with result 'exit-code'.
Dec 01 20:32:01 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.gxqcne for dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:32:01 compute-0 systemd[1]: ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b@mgr.compute-0.gxqcne.service: Consumed 5.942s CPU time, 353.7M memory peak, read 0B from disk, written 156.5K to disk.
Dec 01 20:32:01 compute-0 systemd[1]: Reloading.
Dec 01 20:32:01 compute-0 systemd-sysv-generator[82644]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:32:01 compute-0 systemd-rc-local-generator[82640]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:32:01 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:32:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 01 20:32:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329458019' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 01 20:32:01 compute-0 blissful_pasteur[82541]: 
Dec 01 20:32:01 compute-0 blissful_pasteur[82541]: {"fsid":"dcf60a89-bba0-58b0-a1bf-d4bde723201b","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":47,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-01T20:31:12:115619+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-01T20:31:12.117544+0000","services":{}},"progress_events":{"ef3fc12f-234a-4765-899d-3ff6fd0d6720":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 01 20:32:01 compute-0 systemd[1]: libpod-21e7493123bc8cc76008e9e3c3ce1d3740a20f8d20cedba41c6660a00083a1ea.scope: Deactivated successfully.
Dec 01 20:32:01 compute-0 podman[82505]: 2025-12-01 20:32:01.414700391 +0000 UTC m=+0.691345732 container died 21e7493123bc8cc76008e9e3c3ce1d3740a20f8d20cedba41c6660a00083a1ea (image=quay.io/ceph/ceph:v20, name=blissful_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-43f5850be49cb5e1ccf58ca97d3de4e80243d15a171751219bba1d8f9a7e7db7-merged.mount: Deactivated successfully.
Dec 01 20:32:01 compute-0 sudo[82427]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:01 compute-0 podman[82505]: 2025-12-01 20:32:01.451387085 +0000 UTC m=+0.728032406 container remove 21e7493123bc8cc76008e9e3c3ce1d3740a20f8d20cedba41c6660a00083a1ea (image=quay.io/ceph/ceph:v20, name=blissful_pasteur, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:01 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.gxqcne
Dec 01 20:32:01 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.gxqcne
Dec 01 20:32:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.gxqcne"} v 0)
Dec 01 20:32:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.gxqcne"} : dispatch
Dec 01 20:32:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.gxqcne"}]': finished
Dec 01 20:32:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 01 20:32:01 compute-0 systemd[1]: libpod-conmon-21e7493123bc8cc76008e9e3c3ce1d3740a20f8d20cedba41c6660a00083a1ea.scope: Deactivated successfully.
Dec 01 20:32:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:01 compute-0 ceph-mgr[76174]: [progress INFO root] complete: finished ev ef3fc12f-234a-4765-899d-3ff6fd0d6720 (Updating mgr deployment (-1 -> 1))
Dec 01 20:32:01 compute-0 ceph-mgr[76174]: [progress INFO root] Completed event ef3fc12f-234a-4765-899d-3ff6fd0d6720 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Dec 01 20:32:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 01 20:32:01 compute-0 sudo[82477]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:01 compute-0 sudo[82670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:32:01 compute-0 sudo[82670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:01 compute-0 sudo[82670]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:01 compute-0 ceph-mon[75880]: Added host compute-0
Dec 01 20:32:01 compute-0 ceph-mon[75880]: Saving service mon spec with placement compute-0
Dec 01 20:32:01 compute-0 ceph-mon[75880]: Saving service mgr spec with placement compute-0
Dec 01 20:32:01 compute-0 ceph-mon[75880]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 01 20:32:01 compute-0 ceph-mon[75880]: Saving service osd.default_drive_group spec with placement compute-0
Dec 01 20:32:01 compute-0 ceph-mon[75880]: Removing daemon mgr.compute-0.gxqcne from compute-0 -- ports [8765]
Dec 01 20:32:01 compute-0 ceph-mon[75880]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2329458019' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 01 20:32:01 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.gxqcne"} : dispatch
Dec 01 20:32:01 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.gxqcne"}]': finished
Dec 01 20:32:01 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:01 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:01 compute-0 sudo[82695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:01 compute-0 sudo[82695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:01 compute-0 sudo[82695]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:01 compute-0 sudo[82720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 01 20:32:01 compute-0 sudo[82720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:02 compute-0 podman[82788]: 2025-12-01 20:32:02.038832348 +0000 UTC m=+0.067234036 container exec 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:02 compute-0 podman[82788]: 2025-12-01 20:32:02.129432191 +0000 UTC m=+0.157833809 container exec_died 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:02 compute-0 sudo[82720]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:32:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:32:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:32:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:32:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:32:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:32:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:32:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:32:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:32:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:32:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:32:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:32:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:32:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:02 compute-0 sudo[82885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:02 compute-0 sudo[82885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:02 compute-0 sudo[82885]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:02 compute-0 ceph-mon[75880]: Removing key for mgr.compute-0.gxqcne
Dec 01 20:32:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:32:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:32:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:32:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:02 compute-0 sudo[82910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:32:02 compute-0 sudo[82910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:02 compute-0 podman[82947]: 2025-12-01 20:32:02.850135467 +0000 UTC m=+0.040055148 container create 1558dc3a14d5db2c0f7e211a1b92840bb472d26885b912bf3655c90d566e3284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_newton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 01 20:32:02 compute-0 systemd[1]: Started libpod-conmon-1558dc3a14d5db2c0f7e211a1b92840bb472d26885b912bf3655c90d566e3284.scope.
Dec 01 20:32:02 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:02 compute-0 podman[82947]: 2025-12-01 20:32:02.832898681 +0000 UTC m=+0.022818382 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:02 compute-0 podman[82947]: 2025-12-01 20:32:02.984037861 +0000 UTC m=+0.173957632 container init 1558dc3a14d5db2c0f7e211a1b92840bb472d26885b912bf3655c90d566e3284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_newton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 01 20:32:02 compute-0 podman[82947]: 2025-12-01 20:32:02.990198981 +0000 UTC m=+0.180118672 container start 1558dc3a14d5db2c0f7e211a1b92840bb472d26885b912bf3655c90d566e3284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_newton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 01 20:32:02 compute-0 podman[82947]: 2025-12-01 20:32:02.99398465 +0000 UTC m=+0.183904421 container attach 1558dc3a14d5db2c0f7e211a1b92840bb472d26885b912bf3655c90d566e3284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:02 compute-0 romantic_newton[82963]: 167 167
Dec 01 20:32:02 compute-0 systemd[1]: libpod-1558dc3a14d5db2c0f7e211a1b92840bb472d26885b912bf3655c90d566e3284.scope: Deactivated successfully.
Dec 01 20:32:02 compute-0 podman[82947]: 2025-12-01 20:32:02.995832997 +0000 UTC m=+0.185752718 container died 1558dc3a14d5db2c0f7e211a1b92840bb472d26885b912bf3655c90d566e3284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_newton, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-acc88d9ce18d20b7acff50687db67edafac7b8cb6659e25e1104a27168d94ee4-merged.mount: Deactivated successfully.
Dec 01 20:32:03 compute-0 podman[82947]: 2025-12-01 20:32:03.04408353 +0000 UTC m=+0.234003231 container remove 1558dc3a14d5db2c0f7e211a1b92840bb472d26885b912bf3655c90d566e3284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_newton, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 01 20:32:03 compute-0 systemd[1]: libpod-conmon-1558dc3a14d5db2c0f7e211a1b92840bb472d26885b912bf3655c90d566e3284.scope: Deactivated successfully.
Dec 01 20:32:03 compute-0 podman[82989]: 2025-12-01 20:32:03.184345631 +0000 UTC m=+0.035487687 container create 41601ebdd665c533b2042b664f327bc9324ff374bd2b957209425a7e66a0f3c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 01 20:32:03 compute-0 systemd[1]: Started libpod-conmon-41601ebdd665c533b2042b664f327bc9324ff374bd2b957209425a7e66a0f3c1.scope.
Dec 01 20:32:03 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:32:03 compute-0 ceph-mgr[76174]: [progress INFO root] Writing back 3 completed events
Dec 01 20:32:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 01 20:32:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:32:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:32:03 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:32:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:32:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:32:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:32:03 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c385be391ad061bc8847f0d40b6755e8cc2d5ecbc0929378cfaf86920160d283/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c385be391ad061bc8847f0d40b6755e8cc2d5ecbc0929378cfaf86920160d283/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c385be391ad061bc8847f0d40b6755e8cc2d5ecbc0929378cfaf86920160d283/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c385be391ad061bc8847f0d40b6755e8cc2d5ecbc0929378cfaf86920160d283/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c385be391ad061bc8847f0d40b6755e8cc2d5ecbc0929378cfaf86920160d283/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:03 compute-0 podman[82989]: 2025-12-01 20:32:03.169207909 +0000 UTC m=+0.020349975 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:03 compute-0 podman[82989]: 2025-12-01 20:32:03.27578918 +0000 UTC m=+0.126931326 container init 41601ebdd665c533b2042b664f327bc9324ff374bd2b957209425a7e66a0f3c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jones, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:32:03 compute-0 podman[82989]: 2025-12-01 20:32:03.287436653 +0000 UTC m=+0.138578709 container start 41601ebdd665c533b2042b664f327bc9324ff374bd2b957209425a7e66a0f3c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:03 compute-0 podman[82989]: 2025-12-01 20:32:03.290640243 +0000 UTC m=+0.141782339 container attach 41601ebdd665c533b2042b664f327bc9324ff374bd2b957209425a7e66a0f3c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:32:03 compute-0 ceph-mon[75880]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:03 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:04 compute-0 intelligent_jones[83006]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:32:04 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:04 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:04 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f68c8d6d-1275-44aa-87ed-4bb7c5666585
Dec 01 20:32:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585"} v 0)
Dec 01 20:32:04 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46840586' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585"} : dispatch
Dec 01 20:32:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 01 20:32:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:32:04 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46840586' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585"}]': finished
Dec 01 20:32:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 01 20:32:04 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 01 20:32:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:04 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:04 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 01 20:32:04 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/46840586' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585"} : dispatch
Dec 01 20:32:04 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/46840586' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585"}]': finished
Dec 01 20:32:04 compute-0 ceph-mon[75880]: osdmap e4: 1 total, 0 up, 1 in
Dec 01 20:32:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:04 compute-0 intelligent_jones[83006]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 01 20:32:04 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 01 20:32:04 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 01 20:32:04 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:04 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 01 20:32:04 compute-0 lvm[83101]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:32:04 compute-0 lvm[83101]: VG ceph_vg0 finished
Dec 01 20:32:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 01 20:32:05 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2653835271' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 01 20:32:05 compute-0 intelligent_jones[83006]:  stderr: got monmap epoch 1
Dec 01 20:32:05 compute-0 intelligent_jones[83006]: --> Creating keyring file for osd.0
Dec 01 20:32:05 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:32:05 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 01 20:32:05 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 01 20:32:05 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid f68c8d6d-1275-44aa-87ed-4bb7c5666585 --setuser ceph --setgroup ceph
Dec 01 20:32:05 compute-0 ceph-mon[75880]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:05 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2653835271' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 01 20:32:06 compute-0 intelligent_jones[83006]:  stderr: 2025-12-01T20:32:05.310+0000 7fc52478e8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec 01 20:32:06 compute-0 intelligent_jones[83006]:  stderr: 2025-12-01T20:32:05.327+0000 7fc52478e8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79
Dec 01 20:32:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:06 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 01 20:32:06 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 01 20:32:06 compute-0 ceph-mon[75880]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 01 20:32:06 compute-0 ceph-mon[75880]: Cluster is now healthy
Dec 01 20:32:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79"} v 0)
Dec 01 20:32:06 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1895835347' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79"} : dispatch
Dec 01 20:32:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 01 20:32:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:32:06 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1895835347' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79"}]': finished
Dec 01 20:32:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 01 20:32:06 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 01 20:32:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:06 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:06 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 01 20:32:06 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:06 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:06 compute-0 lvm[84034]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:32:06 compute-0 lvm[84034]: VG ceph_vg1 finished
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:06 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 01 20:32:07 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:32:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 01 20:32:07 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820532060' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 01 20:32:07 compute-0 intelligent_jones[83006]:  stderr: got monmap epoch 1
Dec 01 20:32:07 compute-0 intelligent_jones[83006]: --> Creating keyring file for osd.1
Dec 01 20:32:07 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 01 20:32:07 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 01 20:32:07 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79 --setuser ceph --setgroup ceph
Dec 01 20:32:07 compute-0 ceph-mon[75880]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:07 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1895835347' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79"} : dispatch
Dec 01 20:32:07 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1895835347' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79"}]': finished
Dec 01 20:32:07 compute-0 ceph-mon[75880]: osdmap e5: 2 total, 0 up, 2 in
Dec 01 20:32:07 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:07 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:07 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2820532060' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 01 20:32:08 compute-0 intelligent_jones[83006]:  stderr: 2025-12-01T20:32:07.502+0000 7f7b917a88c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 01 20:32:08 compute-0 intelligent_jones[83006]:  stderr: 2025-12-01T20:32:07.526+0000 7f7b917a88c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 01 20:32:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:08 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c5b330ef-d0af-41ba-a172-fe530a921657
Dec 01 20:32:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c5b330ef-d0af-41ba-a172-fe530a921657"} v 0)
Dec 01 20:32:08 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1746975043' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "c5b330ef-d0af-41ba-a172-fe530a921657"} : dispatch
Dec 01 20:32:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 01 20:32:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:32:08 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1746975043' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c5b330ef-d0af-41ba-a172-fe530a921657"}]': finished
Dec 01 20:32:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec 01 20:32:08 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec 01 20:32:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:08 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 01 20:32:08 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:08 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:09 compute-0 intelligent_jones[83006]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec 01 20:32:09 compute-0 lvm[84972]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:32:09 compute-0 lvm[84972]: VG ceph_vg2 finished
Dec 01 20:32:09 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec 01 20:32:09 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 01 20:32:09 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:09 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec 01 20:32:09 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:32:09 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:09 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 01 20:32:09 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2743790212' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 01 20:32:09 compute-0 intelligent_jones[83006]:  stderr: got monmap epoch 1
Dec 01 20:32:09 compute-0 intelligent_jones[83006]: --> Creating keyring file for osd.2
Dec 01 20:32:09 compute-0 ceph-mon[75880]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:09 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1746975043' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "c5b330ef-d0af-41ba-a172-fe530a921657"} : dispatch
Dec 01 20:32:09 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1746975043' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c5b330ef-d0af-41ba-a172-fe530a921657"}]': finished
Dec 01 20:32:09 compute-0 ceph-mon[75880]: osdmap e6: 3 total, 0 up, 3 in
Dec 01 20:32:09 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:09 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:09 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:09 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2743790212' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 01 20:32:09 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec 01 20:32:09 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec 01 20:32:09 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid c5b330ef-d0af-41ba-a172-fe530a921657 --setuser ceph --setgroup ceph
Dec 01 20:32:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:10 compute-0 intelligent_jones[83006]:  stderr: 2025-12-01T20:32:09.762+0000 7fb27e2b58c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Dec 01 20:32:10 compute-0 intelligent_jones[83006]:  stderr: 2025-12-01T20:32:09.786+0000 7fb27e2b58c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec 01 20:32:10 compute-0 intelligent_jones[83006]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec 01 20:32:10 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 01 20:32:10 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 01 20:32:10 compute-0 intelligent_jones[83006]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:10 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:10 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 01 20:32:10 compute-0 intelligent_jones[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 01 20:32:10 compute-0 intelligent_jones[83006]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 01 20:32:10 compute-0 intelligent_jones[83006]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec 01 20:32:10 compute-0 systemd[1]: libpod-41601ebdd665c533b2042b664f327bc9324ff374bd2b957209425a7e66a0f3c1.scope: Deactivated successfully.
Dec 01 20:32:10 compute-0 systemd[1]: libpod-41601ebdd665c533b2042b664f327bc9324ff374bd2b957209425a7e66a0f3c1.scope: Consumed 6.092s CPU time.
Dec 01 20:32:10 compute-0 podman[85884]: 2025-12-01 20:32:10.925537316 +0000 UTC m=+0.040398081 container died 41601ebdd665c533b2042b664f327bc9324ff374bd2b957209425a7e66a0f3c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jones, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 01 20:32:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c385be391ad061bc8847f0d40b6755e8cc2d5ecbc0929378cfaf86920160d283-merged.mount: Deactivated successfully.
Dec 01 20:32:10 compute-0 podman[85884]: 2025-12-01 20:32:10.968990609 +0000 UTC m=+0.083851374 container remove 41601ebdd665c533b2042b664f327bc9324ff374bd2b957209425a7e66a0f3c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:10 compute-0 systemd[1]: libpod-conmon-41601ebdd665c533b2042b664f327bc9324ff374bd2b957209425a7e66a0f3c1.scope: Deactivated successfully.
Dec 01 20:32:11 compute-0 sudo[82910]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:11 compute-0 sudo[85898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:11 compute-0 sudo[85898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:11 compute-0 sudo[85898]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:11 compute-0 sudo[85923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:32:11 compute-0 sudo[85923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:11 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:32:11 compute-0 podman[85958]: 2025-12-01 20:32:11.515472707 +0000 UTC m=+0.048110460 container create 7877bafb0fcd585012d96cac4a1e46189960def7cd04d6682ec36c09fc916eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wu, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:11 compute-0 systemd[1]: Started libpod-conmon-7877bafb0fcd585012d96cac4a1e46189960def7cd04d6682ec36c09fc916eda.scope.
Dec 01 20:32:11 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:11 compute-0 podman[85958]: 2025-12-01 20:32:11.493593245 +0000 UTC m=+0.026231028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:11 compute-0 podman[85958]: 2025-12-01 20:32:11.59546966 +0000 UTC m=+0.128107413 container init 7877bafb0fcd585012d96cac4a1e46189960def7cd04d6682ec36c09fc916eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 20:32:11 compute-0 podman[85958]: 2025-12-01 20:32:11.601069344 +0000 UTC m=+0.133707077 container start 7877bafb0fcd585012d96cac4a1e46189960def7cd04d6682ec36c09fc916eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wu, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 01 20:32:11 compute-0 podman[85958]: 2025-12-01 20:32:11.6041712 +0000 UTC m=+0.136809013 container attach 7877bafb0fcd585012d96cac4a1e46189960def7cd04d6682ec36c09fc916eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wu, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:11 compute-0 beautiful_wu[85974]: 167 167
Dec 01 20:32:11 compute-0 systemd[1]: libpod-7877bafb0fcd585012d96cac4a1e46189960def7cd04d6682ec36c09fc916eda.scope: Deactivated successfully.
Dec 01 20:32:11 compute-0 podman[85958]: 2025-12-01 20:32:11.606787563 +0000 UTC m=+0.139425316 container died 7877bafb0fcd585012d96cac4a1e46189960def7cd04d6682ec36c09fc916eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wu, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 01 20:32:11 compute-0 ceph-mon[75880]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4e91e9b0e382faee941afae88e2dedaa7f97d840cac817d006fe0aad9e798d1-merged.mount: Deactivated successfully.
Dec 01 20:32:11 compute-0 podman[85958]: 2025-12-01 20:32:11.648944845 +0000 UTC m=+0.181582588 container remove 7877bafb0fcd585012d96cac4a1e46189960def7cd04d6682ec36c09fc916eda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:11 compute-0 systemd[1]: libpod-conmon-7877bafb0fcd585012d96cac4a1e46189960def7cd04d6682ec36c09fc916eda.scope: Deactivated successfully.
Dec 01 20:32:11 compute-0 podman[85996]: 2025-12-01 20:32:11.808443495 +0000 UTC m=+0.049021998 container create a340b4e063e4fd312cac0245aee146ae874876f3e7f2ab9bb5e6d868034da77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:11 compute-0 systemd[1]: Started libpod-conmon-a340b4e063e4fd312cac0245aee146ae874876f3e7f2ab9bb5e6d868034da77d.scope.
Dec 01 20:32:11 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901f9a2b91d9cbd3e6a4a9a9875dbb8fbf34365fd6110fd3b5c97e6081ad2e34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901f9a2b91d9cbd3e6a4a9a9875dbb8fbf34365fd6110fd3b5c97e6081ad2e34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901f9a2b91d9cbd3e6a4a9a9875dbb8fbf34365fd6110fd3b5c97e6081ad2e34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901f9a2b91d9cbd3e6a4a9a9875dbb8fbf34365fd6110fd3b5c97e6081ad2e34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:11 compute-0 podman[85996]: 2025-12-01 20:32:11.787332198 +0000 UTC m=+0.027910751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:11 compute-0 podman[85996]: 2025-12-01 20:32:11.891405771 +0000 UTC m=+0.131984324 container init a340b4e063e4fd312cac0245aee146ae874876f3e7f2ab9bb5e6d868034da77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 01 20:32:11 compute-0 podman[85996]: 2025-12-01 20:32:11.902163276 +0000 UTC m=+0.142741819 container start a340b4e063e4fd312cac0245aee146ae874876f3e7f2ab9bb5e6d868034da77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:11 compute-0 podman[85996]: 2025-12-01 20:32:11.905381796 +0000 UTC m=+0.145960369 container attach a340b4e063e4fd312cac0245aee146ae874876f3e7f2ab9bb5e6d868034da77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]: {
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:     "0": [
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:         {
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "devices": [
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "/dev/loop3"
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             ],
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_name": "ceph_lv0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_size": "21470642176",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "name": "ceph_lv0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "tags": {
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.cluster_name": "ceph",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.crush_device_class": "",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.encrypted": "0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.objectstore": "bluestore",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.osd_id": "0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.type": "block",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.vdo": "0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.with_tpm": "0"
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             },
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "type": "block",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "vg_name": "ceph_vg0"
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:         }
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:     ],
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:     "1": [
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:         {
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "devices": [
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "/dev/loop4"
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             ],
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_name": "ceph_lv1",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_size": "21470642176",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "name": "ceph_lv1",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "tags": {
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.cluster_name": "ceph",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.crush_device_class": "",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.encrypted": "0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.objectstore": "bluestore",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.osd_id": "1",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.type": "block",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.vdo": "0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.with_tpm": "0"
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             },
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "type": "block",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "vg_name": "ceph_vg1"
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:         }
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:     ],
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:     "2": [
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:         {
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "devices": [
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "/dev/loop5"
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             ],
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_name": "ceph_lv2",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_size": "21470642176",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "name": "ceph_lv2",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "tags": {
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.cluster_name": "ceph",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.crush_device_class": "",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.encrypted": "0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.objectstore": "bluestore",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.osd_id": "2",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.type": "block",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.vdo": "0",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:                 "ceph.with_tpm": "0"
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             },
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "type": "block",
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:             "vg_name": "ceph_vg2"
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:         }
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]:     ]
Dec 01 20:32:12 compute-0 intelligent_proskuriakova[86012]: }
Dec 01 20:32:12 compute-0 systemd[1]: libpod-a340b4e063e4fd312cac0245aee146ae874876f3e7f2ab9bb5e6d868034da77d.scope: Deactivated successfully.
Dec 01 20:32:12 compute-0 podman[85996]: 2025-12-01 20:32:12.246955309 +0000 UTC m=+0.487533832 container died a340b4e063e4fd312cac0245aee146ae874876f3e7f2ab9bb5e6d868034da77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 01 20:32:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-901f9a2b91d9cbd3e6a4a9a9875dbb8fbf34365fd6110fd3b5c97e6081ad2e34-merged.mount: Deactivated successfully.
Dec 01 20:32:12 compute-0 podman[85996]: 2025-12-01 20:32:12.302571722 +0000 UTC m=+0.543150225 container remove a340b4e063e4fd312cac0245aee146ae874876f3e7f2ab9bb5e6d868034da77d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 01 20:32:12 compute-0 systemd[1]: libpod-conmon-a340b4e063e4fd312cac0245aee146ae874876f3e7f2ab9bb5e6d868034da77d.scope: Deactivated successfully.
Dec 01 20:32:12 compute-0 sudo[85923]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 01 20:32:12 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 01 20:32:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:32:12 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:12 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec 01 20:32:12 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec 01 20:32:12 compute-0 sudo[86032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:12 compute-0 sudo[86032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:12 compute-0 sudo[86032]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:12 compute-0 sudo[86057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:32:12 compute-0 sudo[86057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:12 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 01 20:32:12 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:12 compute-0 podman[86122]: 2025-12-01 20:32:12.846520722 +0000 UTC m=+0.056067728 container create 7bcb4defcb3b5a07850a55bb5722edbe4229d3e4c1f3cf8490731c6544d7021e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 01 20:32:12 compute-0 systemd[1]: Started libpod-conmon-7bcb4defcb3b5a07850a55bb5722edbe4229d3e4c1f3cf8490731c6544d7021e.scope.
Dec 01 20:32:12 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:12 compute-0 podman[86122]: 2025-12-01 20:32:12.823347899 +0000 UTC m=+0.032894895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:12 compute-0 podman[86122]: 2025-12-01 20:32:12.923292604 +0000 UTC m=+0.132839600 container init 7bcb4defcb3b5a07850a55bb5722edbe4229d3e4c1f3cf8490731c6544d7021e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 01 20:32:12 compute-0 podman[86122]: 2025-12-01 20:32:12.934144132 +0000 UTC m=+0.143691108 container start 7bcb4defcb3b5a07850a55bb5722edbe4229d3e4c1f3cf8490731c6544d7021e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 01 20:32:12 compute-0 podman[86122]: 2025-12-01 20:32:12.937409054 +0000 UTC m=+0.146956060 container attach 7bcb4defcb3b5a07850a55bb5722edbe4229d3e4c1f3cf8490731c6544d7021e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:12 compute-0 gallant_bohr[86138]: 167 167
Dec 01 20:32:12 compute-0 systemd[1]: libpod-7bcb4defcb3b5a07850a55bb5722edbe4229d3e4c1f3cf8490731c6544d7021e.scope: Deactivated successfully.
Dec 01 20:32:12 compute-0 podman[86122]: 2025-12-01 20:32:12.940034605 +0000 UTC m=+0.149581581 container died 7bcb4defcb3b5a07850a55bb5722edbe4229d3e4c1f3cf8490731c6544d7021e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bohr, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c3bbd25ca63426fead4b612aca1a5631bd5e5698737060a5011aaf9e6906e98-merged.mount: Deactivated successfully.
Dec 01 20:32:12 compute-0 podman[86122]: 2025-12-01 20:32:12.974644824 +0000 UTC m=+0.184191800 container remove 7bcb4defcb3b5a07850a55bb5722edbe4229d3e4c1f3cf8490731c6544d7021e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bohr, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:32:12 compute-0 systemd[1]: libpod-conmon-7bcb4defcb3b5a07850a55bb5722edbe4229d3e4c1f3cf8490731c6544d7021e.scope: Deactivated successfully.
Dec 01 20:32:13 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:32:13 compute-0 podman[86166]: 2025-12-01 20:32:13.251932205 +0000 UTC m=+0.058332059 container create 37699c381ec89b49fd82623319cf6a8e5155617b682331cc80997dfcd16c5423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:13 compute-0 systemd[1]: Started libpod-conmon-37699c381ec89b49fd82623319cf6a8e5155617b682331cc80997dfcd16c5423.scope.
Dec 01 20:32:13 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7e6ecb3b9d9f15b0671648d0ec4bb39de1276a76475c97725eedd4bfdd8900/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7e6ecb3b9d9f15b0671648d0ec4bb39de1276a76475c97725eedd4bfdd8900/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7e6ecb3b9d9f15b0671648d0ec4bb39de1276a76475c97725eedd4bfdd8900/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7e6ecb3b9d9f15b0671648d0ec4bb39de1276a76475c97725eedd4bfdd8900/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7e6ecb3b9d9f15b0671648d0ec4bb39de1276a76475c97725eedd4bfdd8900/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:13 compute-0 podman[86166]: 2025-12-01 20:32:13.235491012 +0000 UTC m=+0.041890966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:13 compute-0 podman[86166]: 2025-12-01 20:32:13.330844254 +0000 UTC m=+0.137244108 container init 37699c381ec89b49fd82623319cf6a8e5155617b682331cc80997dfcd16c5423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 01 20:32:13 compute-0 podman[86166]: 2025-12-01 20:32:13.336762718 +0000 UTC m=+0.143162582 container start 37699c381ec89b49fd82623319cf6a8e5155617b682331cc80997dfcd16c5423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 20:32:13 compute-0 podman[86166]: 2025-12-01 20:32:13.339993848 +0000 UTC m=+0.146393702 container attach 37699c381ec89b49fd82623319cf6a8e5155617b682331cc80997dfcd16c5423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate-test, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:13 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate-test[86183]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 01 20:32:13 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate-test[86183]:                             [--no-systemd] [--no-tmpfs]
Dec 01 20:32:13 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate-test[86183]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 01 20:32:13 compute-0 systemd[1]: libpod-37699c381ec89b49fd82623319cf6a8e5155617b682331cc80997dfcd16c5423.scope: Deactivated successfully.
Dec 01 20:32:13 compute-0 podman[86166]: 2025-12-01 20:32:13.510260914 +0000 UTC m=+0.316660798 container died 37699c381ec89b49fd82623319cf6a8e5155617b682331cc80997dfcd16c5423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate-test, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:32:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc7e6ecb3b9d9f15b0671648d0ec4bb39de1276a76475c97725eedd4bfdd8900-merged.mount: Deactivated successfully.
Dec 01 20:32:13 compute-0 podman[86166]: 2025-12-01 20:32:13.564327808 +0000 UTC m=+0.370727682 container remove 37699c381ec89b49fd82623319cf6a8e5155617b682331cc80997dfcd16c5423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate-test, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:13 compute-0 systemd[1]: libpod-conmon-37699c381ec89b49fd82623319cf6a8e5155617b682331cc80997dfcd16c5423.scope: Deactivated successfully.
Dec 01 20:32:13 compute-0 ceph-mon[75880]: Deploying daemon osd.0 on compute-0
Dec 01 20:32:13 compute-0 ceph-mon[75880]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:13 compute-0 systemd[1]: Reloading.
Dec 01 20:32:13 compute-0 systemd-rc-local-generator[86246]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:32:13 compute-0 systemd-sysv-generator[86250]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:32:14 compute-0 systemd[1]: Reloading.
Dec 01 20:32:14 compute-0 systemd-sysv-generator[86286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:32:14 compute-0 systemd-rc-local-generator[86281]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:32:14 compute-0 systemd[1]: Starting Ceph osd.0 for dcf60a89-bba0-58b0-a1bf-d4bde723201b...
Dec 01 20:32:14 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:14 compute-0 podman[86341]: 2025-12-01 20:32:14.681944042 +0000 UTC m=+0.061489657 container create f60e55a0cf27cb714fc9c455010f0814fb1166aad6dc39cef1f12d9a2810f7a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:14 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a9f6f786fbbc008d689fdc9a27dc8101b8ea90fb06c4653c39cde1dc67252c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a9f6f786fbbc008d689fdc9a27dc8101b8ea90fb06c4653c39cde1dc67252c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a9f6f786fbbc008d689fdc9a27dc8101b8ea90fb06c4653c39cde1dc67252c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a9f6f786fbbc008d689fdc9a27dc8101b8ea90fb06c4653c39cde1dc67252c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a9f6f786fbbc008d689fdc9a27dc8101b8ea90fb06c4653c39cde1dc67252c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:14 compute-0 podman[86341]: 2025-12-01 20:32:14.661889567 +0000 UTC m=+0.041435162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:14 compute-0 podman[86341]: 2025-12-01 20:32:14.766502857 +0000 UTC m=+0.146048442 container init f60e55a0cf27cb714fc9c455010f0814fb1166aad6dc39cef1f12d9a2810f7a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 01 20:32:14 compute-0 podman[86341]: 2025-12-01 20:32:14.780591086 +0000 UTC m=+0.160136701 container start f60e55a0cf27cb714fc9c455010f0814fb1166aad6dc39cef1f12d9a2810f7a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:14 compute-0 podman[86341]: 2025-12-01 20:32:14.784762105 +0000 UTC m=+0.164307690 container attach f60e55a0cf27cb714fc9c455010f0814fb1166aad6dc39cef1f12d9a2810f7a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 01 20:32:14 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:14 compute-0 bash[86341]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:15 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:15 compute-0 bash[86341]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:15 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:32:15 compute-0 lvm[86439]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:32:15 compute-0 lvm[86439]: VG ceph_vg0 finished
Dec 01 20:32:15 compute-0 lvm[86442]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:32:15 compute-0 lvm[86442]: VG ceph_vg1 finished
Dec 01 20:32:15 compute-0 lvm[86444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:32:15 compute-0 lvm[86444]: VG ceph_vg2 finished
Dec 01 20:32:15 compute-0 ceph-mon[75880]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:15 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 01 20:32:15 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:15 compute-0 bash[86341]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 01 20:32:15 compute-0 bash[86341]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:15 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:15 compute-0 bash[86341]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:15 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 01 20:32:15 compute-0 bash[86341]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 01 20:32:15 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 01 20:32:15 compute-0 bash[86341]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 01 20:32:15 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:15 compute-0 bash[86341]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:15 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:15 compute-0 bash[86341]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:15 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 01 20:32:15 compute-0 bash[86341]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 01 20:32:15 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 01 20:32:15 compute-0 bash[86341]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 01 20:32:15 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate[86356]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 01 20:32:15 compute-0 bash[86341]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 01 20:32:15 compute-0 systemd[1]: libpod-f60e55a0cf27cb714fc9c455010f0814fb1166aad6dc39cef1f12d9a2810f7a5.scope: Deactivated successfully.
Dec 01 20:32:15 compute-0 podman[86341]: 2025-12-01 20:32:15.896722822 +0000 UTC m=+1.276268397 container died f60e55a0cf27cb714fc9c455010f0814fb1166aad6dc39cef1f12d9a2810f7a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 01 20:32:15 compute-0 systemd[1]: libpod-f60e55a0cf27cb714fc9c455010f0814fb1166aad6dc39cef1f12d9a2810f7a5.scope: Consumed 1.634s CPU time.
Dec 01 20:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-04a9f6f786fbbc008d689fdc9a27dc8101b8ea90fb06c4653c39cde1dc67252c-merged.mount: Deactivated successfully.
Dec 01 20:32:15 compute-0 podman[86341]: 2025-12-01 20:32:15.948695242 +0000 UTC m=+1.328240827 container remove f60e55a0cf27cb714fc9c455010f0814fb1166aad6dc39cef1f12d9a2810f7a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0-activate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 01 20:32:16 compute-0 podman[86615]: 2025-12-01 20:32:16.144577276 +0000 UTC m=+0.040305448 container create 6923995d2d72e4f1bff03822b204714dfe20043445e2b2dccd6cdb7bc533440d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 01 20:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fffb7a50e6bd6335300936585098dbf684cc59f8727cdeffc0c7f44c104eb16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fffb7a50e6bd6335300936585098dbf684cc59f8727cdeffc0c7f44c104eb16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fffb7a50e6bd6335300936585098dbf684cc59f8727cdeffc0c7f44c104eb16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fffb7a50e6bd6335300936585098dbf684cc59f8727cdeffc0c7f44c104eb16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fffb7a50e6bd6335300936585098dbf684cc59f8727cdeffc0c7f44c104eb16/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:16 compute-0 podman[86615]: 2025-12-01 20:32:16.220635145 +0000 UTC m=+0.116363367 container init 6923995d2d72e4f1bff03822b204714dfe20043445e2b2dccd6cdb7bc533440d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 01 20:32:16 compute-0 podman[86615]: 2025-12-01 20:32:16.126706168 +0000 UTC m=+0.022434320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:16 compute-0 podman[86615]: 2025-12-01 20:32:16.231320078 +0000 UTC m=+0.127048250 container start 6923995d2d72e4f1bff03822b204714dfe20043445e2b2dccd6cdb7bc533440d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 01 20:32:16 compute-0 bash[86615]: 6923995d2d72e4f1bff03822b204714dfe20043445e2b2dccd6cdb7bc533440d
Dec 01 20:32:16 compute-0 systemd[1]: Started Ceph osd.0 for dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:32:16 compute-0 ceph-osd[86634]: set uid:gid to 167:167 (ceph:ceph)
Dec 01 20:32:16 compute-0 ceph-osd[86634]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: pidfile_write: ignore empty --pid-file
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 sudo[86057]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:32:16 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 01 20:32:16 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 01 20:32:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:32:16 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:16 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 01 20:32:16 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 sudo[86650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674400 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 sudo[86650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:16 compute-0 sudo[86650]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e674000 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 ceph-osd[86634]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 01 20:32:16 compute-0 ceph-osd[86634]: load: jerasure load: lrc 
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 sudo[86683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:32:16 compute-0 sudo[86683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 ceph-osd[86634]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 01 20:32:16 compute-0 ceph-osd[86634]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691e675c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691f315800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691f315800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691f315800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691f315800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount shared_bdev_used = 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: RocksDB version: 7.9.2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Git sha 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: DB SUMMARY
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: DB Session ID:  ZVOEEHKVJN509V9KA9P7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: CURRENT file:  CURRENT
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: IDENTITY file:  IDENTITY
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                         Options.error_if_exists: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.create_if_missing: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                         Options.paranoid_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                                     Options.env: 0x55691e505ea0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                                Options.info_log: 0x55691f5608a0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_file_opening_threads: 16
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                              Options.statistics: (nil)
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.use_fsync: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.max_log_file_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                         Options.allow_fallocate: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.use_direct_reads: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.create_missing_column_families: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                              Options.db_log_dir: 
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                                 Options.wal_dir: db.wal
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.advise_random_on_open: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.write_buffer_manager: 0x55691f406b40
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                            Options.rate_limiter: (nil)
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.unordered_write: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.row_cache: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                              Options.wal_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.allow_ingest_behind: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.two_write_queues: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.manual_wal_flush: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.wal_compression: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.atomic_flush: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.log_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.allow_data_in_errors: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.db_host_id: __hostname__
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.max_background_jobs: 4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.max_background_compactions: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.max_subcompactions: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.max_open_files: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.bytes_per_sync: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.max_background_flushes: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Compression algorithms supported:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kZSTD supported: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kXpressCompression supported: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kBZip2Compression supported: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kLZ4Compression supported: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kZlibCompression supported: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kLZ4HCCompression supported: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kSnappyCompression supported: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e509a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e509a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e509a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bf78386b-c9c6-4b63-b1f5-502cf62b2204
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621136639404, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621136641456, "job": 1, "event": "recovery_finished"}
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: freelist init
Dec 01 20:32:16 compute-0 ceph-osd[86634]: freelist _read_cfg
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs umount
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691f315800 /var/lib/ceph/osd/ceph-0/block) close
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691f315800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691f315800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691f315800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bdev(0x55691f315800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluefs mount shared_bdev_used = 27262976
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: RocksDB version: 7.9.2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Git sha 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: DB SUMMARY
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: DB Session ID:  ZVOEEHKVJN509V9KA9P6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: CURRENT file:  CURRENT
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: IDENTITY file:  IDENTITY
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                         Options.error_if_exists: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.create_if_missing: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                         Options.paranoid_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                                     Options.env: 0x55691e505ce0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                                Options.info_log: 0x55691f560960
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_file_opening_threads: 16
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                              Options.statistics: (nil)
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.use_fsync: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.max_log_file_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                         Options.allow_fallocate: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.use_direct_reads: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.create_missing_column_families: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                              Options.db_log_dir: 
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                                 Options.wal_dir: db.wal
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.advise_random_on_open: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.write_buffer_manager: 0x55691f406b40
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                            Options.rate_limiter: (nil)
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.unordered_write: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.row_cache: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                              Options.wal_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.allow_ingest_behind: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.two_write_queues: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.manual_wal_flush: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.wal_compression: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.atomic_flush: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.log_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.allow_data_in_errors: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.db_host_id: __hostname__
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.max_background_jobs: 4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.max_background_compactions: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.max_subcompactions: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.max_open_files: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.bytes_per_sync: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.max_background_flushes: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Compression algorithms supported:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kZSTD supported: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kXpressCompression supported: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kBZip2Compression supported: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kLZ4Compression supported: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kZlibCompression supported: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kLZ4HCCompression supported: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         kSnappyCompression supported: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f560bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e5098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f5610c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e509a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f5610c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e509a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55691f5610c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55691e509a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bf78386b-c9c6-4b63-b1f5-502cf62b2204
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621136689986, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621136694147, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621136, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf78386b-c9c6-4b63-b1f5-502cf62b2204", "db_session_id": "ZVOEEHKVJN509V9KA9P6", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621136697396, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621136, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf78386b-c9c6-4b63-b1f5-502cf62b2204", "db_session_id": "ZVOEEHKVJN509V9KA9P6", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621136699278, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621136, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bf78386b-c9c6-4b63-b1f5-502cf62b2204", "db_session_id": "ZVOEEHKVJN509V9KA9P6", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621136700383, "job": 1, "event": "recovery_finished"}
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55691f768000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: DB pointer 0x55691f71a000
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 01 20:32:16 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:32:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:32:16 compute-0 ceph-osd[86634]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 01 20:32:16 compute-0 ceph-osd[86634]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 01 20:32:16 compute-0 ceph-osd[86634]: _get_class not permitted to load lua
Dec 01 20:32:16 compute-0 ceph-osd[86634]: _get_class not permitted to load sdk
Dec 01 20:32:16 compute-0 ceph-osd[86634]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 01 20:32:16 compute-0 ceph-osd[86634]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 01 20:32:16 compute-0 ceph-osd[86634]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 01 20:32:16 compute-0 ceph-osd[86634]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 01 20:32:16 compute-0 ceph-osd[86634]: osd.0 0 load_pgs
Dec 01 20:32:16 compute-0 ceph-osd[86634]: osd.0 0 load_pgs opened 0 pgs
Dec 01 20:32:16 compute-0 ceph-osd[86634]: osd.0 0 log_to_monitors true
Dec 01 20:32:16 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0[86630]: 2025-12-01T20:32:16.730+0000 7f080baa98c0 -1 osd.0 0 log_to_monitors true
Dec 01 20:32:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 01 20:32:16 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1300158023,v1:192.168.122.100:6803/1300158023]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 01 20:32:16 compute-0 podman[87180]: 2025-12-01 20:32:16.929302126 +0000 UTC m=+0.052887329 container create 48ba7d0901997a4569f974159d2f1e0b9a6d31ef9434c21238b54715f74a08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:32:16 compute-0 systemd[1]: Started libpod-conmon-48ba7d0901997a4569f974159d2f1e0b9a6d31ef9434c21238b54715f74a08ea.scope.
Dec 01 20:32:16 compute-0 podman[87180]: 2025-12-01 20:32:16.900423707 +0000 UTC m=+0.024008960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:17 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:17 compute-0 podman[87180]: 2025-12-01 20:32:17.021053185 +0000 UTC m=+0.144638358 container init 48ba7d0901997a4569f974159d2f1e0b9a6d31ef9434c21238b54715f74a08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_nightingale, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 01 20:32:17 compute-0 podman[87180]: 2025-12-01 20:32:17.027487895 +0000 UTC m=+0.151073068 container start 48ba7d0901997a4569f974159d2f1e0b9a6d31ef9434c21238b54715f74a08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_nightingale, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 01 20:32:17 compute-0 mystifying_nightingale[87197]: 167 167
Dec 01 20:32:17 compute-0 systemd[1]: libpod-48ba7d0901997a4569f974159d2f1e0b9a6d31ef9434c21238b54715f74a08ea.scope: Deactivated successfully.
Dec 01 20:32:17 compute-0 podman[87180]: 2025-12-01 20:32:17.034420712 +0000 UTC m=+0.158005985 container attach 48ba7d0901997a4569f974159d2f1e0b9a6d31ef9434c21238b54715f74a08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_nightingale, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:17 compute-0 podman[87180]: 2025-12-01 20:32:17.03499284 +0000 UTC m=+0.158578033 container died 48ba7d0901997a4569f974159d2f1e0b9a6d31ef9434c21238b54715f74a08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_nightingale, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-9114f8100285e8ecfec81905bd2b4293eee5ffb83a948bde0788010790b0a85c-merged.mount: Deactivated successfully.
Dec 01 20:32:17 compute-0 podman[87180]: 2025-12-01 20:32:17.075398938 +0000 UTC m=+0.198984111 container remove 48ba7d0901997a4569f974159d2f1e0b9a6d31ef9434c21238b54715f74a08ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:17 compute-0 systemd[1]: libpod-conmon-48ba7d0901997a4569f974159d2f1e0b9a6d31ef9434c21238b54715f74a08ea.scope: Deactivated successfully.
Dec 01 20:32:17 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:32:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 01 20:32:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:17 compute-0 ceph-mon[75880]: Deploying daemon osd.1 on compute-0
Dec 01 20:32:17 compute-0 ceph-mon[75880]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:17 compute-0 ceph-mon[75880]: from='osd.0 [v2:192.168.122.100:6802/1300158023,v1:192.168.122.100:6803/1300158023]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 01 20:32:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 01 20:32:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:32:17 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1300158023,v1:192.168.122.100:6803/1300158023]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 01 20:32:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec 01 20:32:17 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec 01 20:32:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 01 20:32:17 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1300158023,v1:192.168.122.100:6803/1300158023]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 01 20:32:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 01 20:32:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:17 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 01 20:32:17 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:17 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:17 compute-0 podman[87226]: 2025-12-01 20:32:17.385910933 +0000 UTC m=+0.056095459 container create 04ff8908dcc7bf3437a468fbb238d6f806202a0b649477a1132570051e1260f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:17 compute-0 systemd[1]: Started libpod-conmon-04ff8908dcc7bf3437a468fbb238d6f806202a0b649477a1132570051e1260f3.scope.
Dec 01 20:32:17 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f473767d7adb0ac40bc84002c6a55fa4584fac211e126ef6d537be6139aeca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:17 compute-0 podman[87226]: 2025-12-01 20:32:17.3655591 +0000 UTC m=+0.035743606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f473767d7adb0ac40bc84002c6a55fa4584fac211e126ef6d537be6139aeca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f473767d7adb0ac40bc84002c6a55fa4584fac211e126ef6d537be6139aeca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f473767d7adb0ac40bc84002c6a55fa4584fac211e126ef6d537be6139aeca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f473767d7adb0ac40bc84002c6a55fa4584fac211e126ef6d537be6139aeca/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:17 compute-0 podman[87226]: 2025-12-01 20:32:17.478519709 +0000 UTC m=+0.148704265 container init 04ff8908dcc7bf3437a468fbb238d6f806202a0b649477a1132570051e1260f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 01 20:32:17 compute-0 podman[87226]: 2025-12-01 20:32:17.496696616 +0000 UTC m=+0.166881112 container start 04ff8908dcc7bf3437a468fbb238d6f806202a0b649477a1132570051e1260f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate-test, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:17 compute-0 podman[87226]: 2025-12-01 20:32:17.501574837 +0000 UTC m=+0.171759423 container attach 04ff8908dcc7bf3437a468fbb238d6f806202a0b649477a1132570051e1260f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate-test, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 01 20:32:17 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate-test[87242]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 01 20:32:17 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate-test[87242]:                             [--no-systemd] [--no-tmpfs]
Dec 01 20:32:17 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate-test[87242]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 01 20:32:17 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 01 20:32:17 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 01 20:32:17 compute-0 systemd[1]: libpod-04ff8908dcc7bf3437a468fbb238d6f806202a0b649477a1132570051e1260f3.scope: Deactivated successfully.
Dec 01 20:32:17 compute-0 podman[87226]: 2025-12-01 20:32:17.700902858 +0000 UTC m=+0.371087374 container died 04ff8908dcc7bf3437a468fbb238d6f806202a0b649477a1132570051e1260f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6f473767d7adb0ac40bc84002c6a55fa4584fac211e126ef6d537be6139aeca-merged.mount: Deactivated successfully.
Dec 01 20:32:17 compute-0 podman[87226]: 2025-12-01 20:32:17.756366726 +0000 UTC m=+0.426551202 container remove 04ff8908dcc7bf3437a468fbb238d6f806202a0b649477a1132570051e1260f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:17 compute-0 systemd[1]: libpod-conmon-04ff8908dcc7bf3437a468fbb238d6f806202a0b649477a1132570051e1260f3.scope: Deactivated successfully.
Dec 01 20:32:18 compute-0 systemd[1]: Reloading.
Dec 01 20:32:18 compute-0 systemd-rc-local-generator[87302]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:32:18 compute-0 systemd-sysv-generator[87305]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:32:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 01 20:32:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:32:18 compute-0 systemd[1]: Reloading.
Dec 01 20:32:18 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1300158023,v1:192.168.122.100:6803/1300158023]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 01 20:32:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec 01 20:32:18 compute-0 ceph-osd[86634]: osd.0 0 done with init, starting boot process
Dec 01 20:32:18 compute-0 ceph-osd[86634]: osd.0 0 start_boot
Dec 01 20:32:18 compute-0 ceph-osd[86634]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 01 20:32:18 compute-0 ceph-osd[86634]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 01 20:32:18 compute-0 ceph-osd[86634]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 01 20:32:18 compute-0 ceph-osd[86634]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 01 20:32:18 compute-0 ceph-osd[86634]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 01 20:32:18 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec 01 20:32:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:18 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:18 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:18 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 01 20:32:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:18 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:18 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:18 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:18 compute-0 ceph-mon[75880]: from='osd.0 [v2:192.168.122.100:6802/1300158023,v1:192.168.122.100:6803/1300158023]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 01 20:32:18 compute-0 ceph-mon[75880]: osdmap e7: 3 total, 0 up, 3 in
Dec 01 20:32:18 compute-0 ceph-mon[75880]: from='osd.0 [v2:192.168.122.100:6802/1300158023,v1:192.168.122.100:6803/1300158023]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 01 20:32:18 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:18 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:18 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:18 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1300158023; not ready for session (expect reconnect)
Dec 01 20:32:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:18 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:18 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 01 20:32:18 compute-0 systemd-sysv-generator[87340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:32:18 compute-0 systemd-rc-local-generator[87336]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:32:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:18 compute-0 systemd[1]: Starting Ceph osd.1 for dcf60a89-bba0-58b0-a1bf-d4bde723201b...
Dec 01 20:32:18 compute-0 podman[87400]: 2025-12-01 20:32:18.901363613 +0000 UTC m=+0.041800944 container create 6a81c968ede64bcaaca0f22c2e9bede25de97f1ccc0f9e4243142a3589e7bf87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:18 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f09169f3e00e040a8329f9f268e9b4d32c43203599125dce0bce39b05b1fabab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f09169f3e00e040a8329f9f268e9b4d32c43203599125dce0bce39b05b1fabab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f09169f3e00e040a8329f9f268e9b4d32c43203599125dce0bce39b05b1fabab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f09169f3e00e040a8329f9f268e9b4d32c43203599125dce0bce39b05b1fabab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f09169f3e00e040a8329f9f268e9b4d32c43203599125dce0bce39b05b1fabab/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:18 compute-0 podman[87400]: 2025-12-01 20:32:18.884453306 +0000 UTC m=+0.024890667 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:18 compute-0 podman[87400]: 2025-12-01 20:32:18.987973491 +0000 UTC m=+0.128410842 container init 6a81c968ede64bcaaca0f22c2e9bede25de97f1ccc0f9e4243142a3589e7bf87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 01 20:32:18 compute-0 podman[87400]: 2025-12-01 20:32:18.995598479 +0000 UTC m=+0.136035830 container start 6a81c968ede64bcaaca0f22c2e9bede25de97f1ccc0f9e4243142a3589e7bf87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:19 compute-0 podman[87400]: 2025-12-01 20:32:19.015036174 +0000 UTC m=+0.155473505 container attach 6a81c968ede64bcaaca0f22c2e9bede25de97f1ccc0f9e4243142a3589e7bf87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:19 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:19 compute-0 bash[87400]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:19 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:19 compute-0 bash[87400]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:19 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:32:19 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1300158023; not ready for session (expect reconnect)
Dec 01 20:32:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:19 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:19 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 01 20:32:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:19 compute-0 ceph-mon[75880]: from='osd.0 [v2:192.168.122.100:6802/1300158023,v1:192.168.122.100:6803/1300158023]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 01 20:32:19 compute-0 ceph-mon[75880]: osdmap e8: 3 total, 0 up, 3 in
Dec 01 20:32:19 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:19 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:19 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:19 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:19 compute-0 ceph-mon[75880]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:19 compute-0 lvm[87501]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:32:19 compute-0 lvm[87502]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:32:19 compute-0 lvm[87502]: VG ceph_vg1 finished
Dec 01 20:32:19 compute-0 lvm[87501]: VG ceph_vg0 finished
Dec 01 20:32:19 compute-0 lvm[87504]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:32:19 compute-0 lvm[87504]: VG ceph_vg2 finished
Dec 01 20:32:19 compute-0 lvm[87505]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:32:19 compute-0 lvm[87505]: VG ceph_vg2 finished
Dec 01 20:32:19 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 01 20:32:19 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:19 compute-0 bash[87400]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 01 20:32:19 compute-0 bash[87400]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:19 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:19 compute-0 bash[87400]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:19 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 01 20:32:19 compute-0 bash[87400]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 01 20:32:19 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 01 20:32:19 compute-0 bash[87400]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 01 20:32:20 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 bash[87400]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 bash[87400]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 01 20:32:20 compute-0 bash[87400]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 01 20:32:20 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 01 20:32:20 compute-0 bash[87400]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 01 20:32:20 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate[87416]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 01 20:32:20 compute-0 bash[87400]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 01 20:32:20 compute-0 systemd[1]: libpod-6a81c968ede64bcaaca0f22c2e9bede25de97f1ccc0f9e4243142a3589e7bf87.scope: Deactivated successfully.
Dec 01 20:32:20 compute-0 systemd[1]: libpod-6a81c968ede64bcaaca0f22c2e9bede25de97f1ccc0f9e4243142a3589e7bf87.scope: Consumed 1.488s CPU time.
Dec 01 20:32:20 compute-0 podman[87618]: 2025-12-01 20:32:20.115600626 +0000 UTC m=+0.031374578 container died 6a81c968ede64bcaaca0f22c2e9bede25de97f1ccc0f9e4243142a3589e7bf87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f09169f3e00e040a8329f9f268e9b4d32c43203599125dce0bce39b05b1fabab-merged.mount: Deactivated successfully.
Dec 01 20:32:20 compute-0 podman[87618]: 2025-12-01 20:32:20.277858652 +0000 UTC m=+0.193632604 container remove 6a81c968ede64bcaaca0f22c2e9bede25de97f1ccc0f9e4243142a3589e7bf87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:32:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:20 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:20 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1300158023; not ready for session (expect reconnect)
Dec 01 20:32:20 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 01 20:32:20 compute-0 ceph-mon[75880]: purged_snaps scrub starts
Dec 01 20:32:20 compute-0 ceph-mon[75880]: purged_snaps scrub ok
Dec 01 20:32:20 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:20 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:20 compute-0 podman[87675]: 2025-12-01 20:32:20.537934196 +0000 UTC m=+0.063995955 container create b356bc457babb2be2b792af59f58cfbb26c0ff35ec1fe4428bc5c9ac1a42f9e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 01 20:32:20 compute-0 podman[87675]: 2025-12-01 20:32:20.501882692 +0000 UTC m=+0.027944501 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c3b5cb86bef130ec977540fbe5d3c652e29c493e3261f7b9e45ce3289140d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c3b5cb86bef130ec977540fbe5d3c652e29c493e3261f7b9e45ce3289140d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c3b5cb86bef130ec977540fbe5d3c652e29c493e3261f7b9e45ce3289140d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c3b5cb86bef130ec977540fbe5d3c652e29c493e3261f7b9e45ce3289140d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2c3b5cb86bef130ec977540fbe5d3c652e29c493e3261f7b9e45ce3289140d6/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:20 compute-0 podman[87675]: 2025-12-01 20:32:20.638399836 +0000 UTC m=+0.164461605 container init b356bc457babb2be2b792af59f58cfbb26c0ff35ec1fe4428bc5c9ac1a42f9e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:20 compute-0 podman[87675]: 2025-12-01 20:32:20.649648866 +0000 UTC m=+0.175710615 container start b356bc457babb2be2b792af59f58cfbb26c0ff35ec1fe4428bc5c9ac1a42f9e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 01 20:32:20 compute-0 bash[87675]: b356bc457babb2be2b792af59f58cfbb26c0ff35ec1fe4428bc5c9ac1a42f9e0
Dec 01 20:32:20 compute-0 systemd[1]: Started Ceph osd.1 for dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:32:20 compute-0 ceph-osd[87692]: set uid:gid to 167:167 (ceph:ceph)
Dec 01 20:32:20 compute-0 ceph-osd[87692]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 01 20:32:20 compute-0 ceph-osd[87692]: pidfile_write: ignore empty --pid-file
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:20 compute-0 sudo[86683]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:20 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:32:20 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 01 20:32:20 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 01 20:32:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:32:20 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:20 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec 01 20:32:20 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:20 compute-0 sudo[87710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:20 compute-0 sudo[87710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74400 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:20 compute-0 sudo[87710]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f74000 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:20 compute-0 ceph-osd[87692]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 01 20:32:20 compute-0 ceph-osd[87692]: load: jerasure load: lrc 
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:20 compute-0 sudo[87740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:32:20 compute-0 sudo[87740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:20 compute-0 ceph-osd[87692]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 01 20:32:20 compute-0 ceph-osd[87692]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:20 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563145f75c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563146c0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563146c0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563146c0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563146c0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount shared_bdev_used = 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: RocksDB version: 7.9.2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Git sha 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: DB SUMMARY
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: DB Session ID:  56MWSW7T5HS3B1M9OY6Q
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: CURRENT file:  CURRENT
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: IDENTITY file:  IDENTITY
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                         Options.error_if_exists: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.create_if_missing: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                         Options.paranoid_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                                     Options.env: 0x563145e05ea0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                                Options.info_log: 0x563146e568a0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_file_opening_threads: 16
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                              Options.statistics: (nil)
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.use_fsync: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.max_log_file_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                         Options.allow_fallocate: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.use_direct_reads: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.create_missing_column_families: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                              Options.db_log_dir: 
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                                 Options.wal_dir: db.wal
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.advise_random_on_open: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.write_buffer_manager: 0x563145e6ab40
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                            Options.rate_limiter: (nil)
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.unordered_write: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.row_cache: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                              Options.wal_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.allow_ingest_behind: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.two_write_queues: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.manual_wal_flush: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.wal_compression: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.atomic_flush: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.log_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.allow_data_in_errors: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.db_host_id: __hostname__
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.max_background_jobs: 4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.max_background_compactions: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.max_subcompactions: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.max_open_files: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.bytes_per_sync: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.max_background_flushes: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Compression algorithms supported:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kZSTD supported: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kXpressCompression supported: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kBZip2Compression supported: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kLZ4Compression supported: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kZlibCompression supported: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kLZ4HCCompression supported: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kSnappyCompression supported: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e09a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e09a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e09a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 68c288c3-1778-4d36-b3e6-14f61297bbf4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621141066124, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621141067561, "job": 1, "event": "recovery_finished"}
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: freelist init
Dec 01 20:32:21 compute-0 ceph-osd[87692]: freelist _read_cfg
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs umount
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563146c0b800 /var/lib/ceph/osd/ceph-1/block) close
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563146c0b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563146c0b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563146c0b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bdev(0x563146c0b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluefs mount shared_bdev_used = 27262976
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: RocksDB version: 7.9.2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Git sha 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: DB SUMMARY
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: DB Session ID:  56MWSW7T5HS3B1M9OY6R
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: CURRENT file:  CURRENT
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: IDENTITY file:  IDENTITY
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                         Options.error_if_exists: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.create_if_missing: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                         Options.paranoid_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                                     Options.env: 0x563147026a80
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                                Options.info_log: 0x563146e56a20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_file_opening_threads: 16
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                              Options.statistics: (nil)
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.use_fsync: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.max_log_file_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                         Options.allow_fallocate: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.use_direct_reads: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.create_missing_column_families: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                              Options.db_log_dir: 
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                                 Options.wal_dir: db.wal
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.advise_random_on_open: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.write_buffer_manager: 0x563145e6b900
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                            Options.rate_limiter: (nil)
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.unordered_write: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.row_cache: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                              Options.wal_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.allow_ingest_behind: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.two_write_queues: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.manual_wal_flush: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.wal_compression: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.atomic_flush: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.log_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.allow_data_in_errors: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.db_host_id: __hostname__
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.max_background_jobs: 4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.max_background_compactions: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.max_subcompactions: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.max_open_files: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.bytes_per_sync: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.max_background_flushes: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Compression algorithms supported:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kZSTD supported: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kXpressCompression supported: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kBZip2Compression supported: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kLZ4Compression supported: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kZlibCompression supported: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kLZ4HCCompression supported: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         kSnappyCompression supported: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e56bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e098d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e570c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e09a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e570c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e09a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563146e570c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x563145e09a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 68c288c3-1778-4d36-b3e6-14f61297bbf4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621141131485, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621141150369, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621141, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68c288c3-1778-4d36-b3e6-14f61297bbf4", "db_session_id": "56MWSW7T5HS3B1M9OY6R", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621141176323, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621141, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68c288c3-1778-4d36-b3e6-14f61297bbf4", "db_session_id": "56MWSW7T5HS3B1M9OY6R", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621141183059, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621141, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68c288c3-1778-4d36-b3e6-14f61297bbf4", "db_session_id": "56MWSW7T5HS3B1M9OY6R", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621141210754, "job": 1, "event": "recovery_finished"}
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 01 20:32:21 compute-0 ceph-mgr[76174]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563147070000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: DB pointer 0x563147010000
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 01 20:32:21 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:32:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:32:21 compute-0 ceph-osd[87692]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 01 20:32:21 compute-0 ceph-osd[87692]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 01 20:32:21 compute-0 ceph-osd[87692]: _get_class not permitted to load lua
Dec 01 20:32:21 compute-0 ceph-osd[87692]: _get_class not permitted to load sdk
Dec 01 20:32:21 compute-0 ceph-osd[87692]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 01 20:32:21 compute-0 ceph-osd[87692]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 01 20:32:21 compute-0 ceph-osd[87692]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 01 20:32:21 compute-0 ceph-osd[87692]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 01 20:32:21 compute-0 ceph-osd[87692]: osd.1 0 load_pgs
Dec 01 20:32:21 compute-0 ceph-osd[87692]: osd.1 0 load_pgs opened 0 pgs
Dec 01 20:32:21 compute-0 ceph-osd[87692]: osd.1 0 log_to_monitors true
Dec 01 20:32:21 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1[87688]: 2025-12-01T20:32:21.303+0000 7f40f46c88c0 -1 osd.1 0 log_to_monitors true
Dec 01 20:32:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 01 20:32:21 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/454414153,v1:192.168.122.100:6807/454414153]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 01 20:32:21 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1300158023; not ready for session (expect reconnect)
Dec 01 20:32:21 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 01 20:32:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:21 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:21 compute-0 podman[88202]: 2025-12-01 20:32:21.333757422 +0000 UTC m=+0.027769106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:21 compute-0 podman[88202]: 2025-12-01 20:32:21.610519016 +0000 UTC m=+0.304530630 container create 33e8a00ccc8ddb34a97a94628d685cfce9248d111170ed8c95842695449e157b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_shockley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:21 compute-0 systemd[1]: Started libpod-conmon-33e8a00ccc8ddb34a97a94628d685cfce9248d111170ed8c95842695449e157b.scope.
Dec 01 20:32:21 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 01 20:32:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:32:21 compute-0 ceph-mon[75880]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:21 compute-0 podman[88202]: 2025-12-01 20:32:21.991742304 +0000 UTC m=+0.685753938 container init 33e8a00ccc8ddb34a97a94628d685cfce9248d111170ed8c95842695449e157b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 01 20:32:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 01 20:32:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:21 compute-0 ceph-mon[75880]: Deploying daemon osd.2 on compute-0
Dec 01 20:32:21 compute-0 ceph-mon[75880]: from='osd.1 [v2:192.168.122.100:6806/454414153,v1:192.168.122.100:6807/454414153]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 01 20:32:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:22 compute-0 podman[88202]: 2025-12-01 20:32:22.003495521 +0000 UTC m=+0.697507165 container start 33e8a00ccc8ddb34a97a94628d685cfce9248d111170ed8c95842695449e157b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_shockley, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Dec 01 20:32:22 compute-0 xenodochial_shockley[88252]: 167 167
Dec 01 20:32:22 compute-0 systemd[1]: libpod-33e8a00ccc8ddb34a97a94628d685cfce9248d111170ed8c95842695449e157b.scope: Deactivated successfully.
Dec 01 20:32:22 compute-0 podman[88202]: 2025-12-01 20:32:22.017723134 +0000 UTC m=+0.711734778 container attach 33e8a00ccc8ddb34a97a94628d685cfce9248d111170ed8c95842695449e157b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_shockley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:32:22 compute-0 podman[88202]: 2025-12-01 20:32:22.019998625 +0000 UTC m=+0.714010229 container died 33e8a00ccc8ddb34a97a94628d685cfce9248d111170ed8c95842695449e157b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_shockley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:32:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/454414153,v1:192.168.122.100:6807/454414153]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 01 20:32:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Dec 01 20:32:22 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Dec 01 20:32:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 01 20:32:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/454414153,v1:192.168.122.100:6807/454414153]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 01 20:32:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 01 20:32:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:22 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 01 20:32:22 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:22 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-073051cf5850a954710668a33e4d26b2fbb4a9382dd8b90398363d1635f81f43-merged.mount: Deactivated successfully.
Dec 01 20:32:22 compute-0 podman[88202]: 2025-12-01 20:32:22.13310742 +0000 UTC m=+0.827119024 container remove 33e8a00ccc8ddb34a97a94628d685cfce9248d111170ed8c95842695449e157b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_shockley, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:22 compute-0 systemd[1]: libpod-conmon-33e8a00ccc8ddb34a97a94628d685cfce9248d111170ed8c95842695449e157b.scope: Deactivated successfully.
Dec 01 20:32:22 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 01 20:32:22 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 01 20:32:22 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1300158023; not ready for session (expect reconnect)
Dec 01 20:32:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:22 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 01 20:32:22 compute-0 podman[88285]: 2025-12-01 20:32:22.405322441 +0000 UTC m=+0.048295595 container create 709da6980633167ea19a11667c74fb5d1500bc350d37ab57c2204044f21cb891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 01 20:32:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:22 compute-0 ceph-osd[86634]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.201 iops: 8243.496 elapsed_sec: 0.364
Dec 01 20:32:22 compute-0 ceph-osd[86634]: log_channel(cluster) log [WRN] : OSD bench result of 8243.495513 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 01 20:32:22 compute-0 ceph-osd[86634]: osd.0 0 waiting for initial osdmap
Dec 01 20:32:22 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0[86630]: 2025-12-01T20:32:22.434+0000 7f0807a2b640 -1 osd.0 0 waiting for initial osdmap
Dec 01 20:32:22 compute-0 ceph-osd[86634]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 01 20:32:22 compute-0 ceph-osd[86634]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 01 20:32:22 compute-0 ceph-osd[86634]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 01 20:32:22 compute-0 ceph-osd[86634]: osd.0 9 check_osdmap_features require_osd_release unknown -> tentacle
Dec 01 20:32:22 compute-0 systemd[1]: Started libpod-conmon-709da6980633167ea19a11667c74fb5d1500bc350d37ab57c2204044f21cb891.scope.
Dec 01 20:32:22 compute-0 ceph-osd[86634]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 01 20:32:22 compute-0 ceph-osd[86634]: osd.0 9 set_numa_affinity not setting numa affinity
Dec 01 20:32:22 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-0[86630]: 2025-12-01T20:32:22.461+0000 7f0802830640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 01 20:32:22 compute-0 ceph-osd[86634]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 01 20:32:22 compute-0 podman[88285]: 2025-12-01 20:32:22.384092249 +0000 UTC m=+0.027065413 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:22 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5926368bdba7d18792763c55ccdcc4e79cfc3d6cda20d4338af596accf5a2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5926368bdba7d18792763c55ccdcc4e79cfc3d6cda20d4338af596accf5a2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5926368bdba7d18792763c55ccdcc4e79cfc3d6cda20d4338af596accf5a2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5926368bdba7d18792763c55ccdcc4e79cfc3d6cda20d4338af596accf5a2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5926368bdba7d18792763c55ccdcc4e79cfc3d6cda20d4338af596accf5a2d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:22 compute-0 podman[88285]: 2025-12-01 20:32:22.516699852 +0000 UTC m=+0.159673036 container init 709da6980633167ea19a11667c74fb5d1500bc350d37ab57c2204044f21cb891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:22 compute-0 podman[88285]: 2025-12-01 20:32:22.526393843 +0000 UTC m=+0.169366997 container start 709da6980633167ea19a11667c74fb5d1500bc350d37ab57c2204044f21cb891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 01 20:32:22 compute-0 podman[88285]: 2025-12-01 20:32:22.530881643 +0000 UTC m=+0.173854797 container attach 709da6980633167ea19a11667c74fb5d1500bc350d37ab57c2204044f21cb891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 01 20:32:22 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate-test[88301]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 01 20:32:22 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate-test[88301]:                             [--no-systemd] [--no-tmpfs]
Dec 01 20:32:22 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate-test[88301]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 01 20:32:22 compute-0 systemd[1]: libpod-709da6980633167ea19a11667c74fb5d1500bc350d37ab57c2204044f21cb891.scope: Deactivated successfully.
Dec 01 20:32:22 compute-0 podman[88285]: 2025-12-01 20:32:22.727777848 +0000 UTC m=+0.370751002 container died 709da6980633167ea19a11667c74fb5d1500bc350d37ab57c2204044f21cb891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate-test, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff5926368bdba7d18792763c55ccdcc4e79cfc3d6cda20d4338af596accf5a2d-merged.mount: Deactivated successfully.
Dec 01 20:32:22 compute-0 podman[88285]: 2025-12-01 20:32:22.773057659 +0000 UTC m=+0.416030803 container remove 709da6980633167ea19a11667c74fb5d1500bc350d37ab57c2204044f21cb891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate-test, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:22 compute-0 systemd[1]: libpod-conmon-709da6980633167ea19a11667c74fb5d1500bc350d37ab57c2204044f21cb891.scope: Deactivated successfully.
Dec 01 20:32:23 compute-0 systemd[1]: Reloading.
Dec 01 20:32:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 01 20:32:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:32:23 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/454414153,v1:192.168.122.100:6807/454414153]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 01 20:32:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec 01 20:32:23 compute-0 ceph-osd[87692]: osd.1 0 done with init, starting boot process
Dec 01 20:32:23 compute-0 ceph-osd[87692]: osd.1 0 start_boot
Dec 01 20:32:23 compute-0 ceph-osd[87692]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 01 20:32:23 compute-0 ceph-osd[87692]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 01 20:32:23 compute-0 ceph-osd[87692]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 01 20:32:23 compute-0 ceph-osd[87692]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 01 20:32:23 compute-0 ceph-osd[87692]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 01 20:32:23 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1300158023,v1:192.168.122.100:6803/1300158023] boot
Dec 01 20:32:23 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec 01 20:32:23 compute-0 ceph-osd[86634]: osd.0 10 state: booting -> active
Dec 01 20:32:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 01 20:32:23 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:23 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:23 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:23 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:23 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:23 compute-0 ceph-mon[75880]: from='osd.1 [v2:192.168.122.100:6806/454414153,v1:192.168.122.100:6807/454414153]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 01 20:32:23 compute-0 ceph-mon[75880]: osdmap e9: 3 total, 0 up, 3 in
Dec 01 20:32:23 compute-0 ceph-mon[75880]: from='osd.1 [v2:192.168.122.100:6806/454414153,v1:192.168.122.100:6807/454414153]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 01 20:32:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:23 compute-0 ceph-mon[75880]: OSD bench result of 8243.495513 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 01 20:32:23 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/454414153; not ready for session (expect reconnect)
Dec 01 20:32:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:23 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:23 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:23 compute-0 systemd-rc-local-generator[88362]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:32:23 compute-0 systemd-sysv-generator[88365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:32:23 compute-0 ceph-mgr[76174]: [devicehealth INFO root] creating mgr pool
Dec 01 20:32:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 01 20:32:23 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 01 20:32:23 compute-0 systemd[1]: Reloading.
Dec 01 20:32:23 compute-0 systemd-rc-local-generator[88403]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:32:23 compute-0 systemd-sysv-generator[88406]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:32:23 compute-0 systemd[1]: Starting Ceph osd.2 for dcf60a89-bba0-58b0-a1bf-d4bde723201b...
Dec 01 20:32:23 compute-0 podman[88459]: 2025-12-01 20:32:23.946248134 +0000 UTC m=+0.078336682 container create 826cea15500acc779666b40fe4f78b8494f70ec6d50df1c88f3237793ac78da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:23 compute-0 podman[88459]: 2025-12-01 20:32:23.894876623 +0000 UTC m=+0.026965221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:24 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60a2fb3d436405fa0949921402d163230b2934ee0ee7e92102d362c9ab18cace/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60a2fb3d436405fa0949921402d163230b2934ee0ee7e92102d362c9ab18cace/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60a2fb3d436405fa0949921402d163230b2934ee0ee7e92102d362c9ab18cace/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60a2fb3d436405fa0949921402d163230b2934ee0ee7e92102d362c9ab18cace/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60a2fb3d436405fa0949921402d163230b2934ee0ee7e92102d362c9ab18cace/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 01 20:32:24 compute-0 podman[88459]: 2025-12-01 20:32:24.074525981 +0000 UTC m=+0.206614619 container init 826cea15500acc779666b40fe4f78b8494f70ec6d50df1c88f3237793ac78da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:24 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/454414153; not ready for session (expect reconnect)
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:24 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:24 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:24 compute-0 podman[88459]: 2025-12-01 20:32:24.085678678 +0000 UTC m=+0.217767266 container start 826cea15500acc779666b40fe4f78b8494f70ec6d50df1c88f3237793ac78da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:24 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 01 20:32:24 compute-0 podman[88459]: 2025-12-01 20:32:24.107586601 +0000 UTC m=+0.239675199 container attach 826cea15500acc779666b40fe4f78b8494f70ec6d50df1c88f3237793ac78da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:24 compute-0 ceph-osd[86634]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 01 20:32:24 compute-0 ceph-osd[86634]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 01 20:32:24 compute-0 ceph-osd[86634]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 01 20:32:24 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:24 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:24 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:24 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:24 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 01 20:32:24 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 01 20:32:24 compute-0 ceph-mon[75880]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 01 20:32:24 compute-0 ceph-mon[75880]: from='osd.1 [v2:192.168.122.100:6806/454414153,v1:192.168.122.100:6807/454414153]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 01 20:32:24 compute-0 ceph-mon[75880]: osd.0 [v2:192.168.122.100:6802/1300158023,v1:192.168.122.100:6803/1300158023] boot
Dec 01 20:32:24 compute-0 ceph-mon[75880]: osdmap e10: 3 total, 1 up, 3 in
Dec 01 20:32:24 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 01 20:32:24 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:24 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:24 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:24 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 01 20:32:24 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:24 compute-0 bash[88459]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:24 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:24 compute-0 bash[88459]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v28: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 01 20:32:24 compute-0 lvm[88558]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:32:24 compute-0 lvm[88558]: VG ceph_vg0 finished
Dec 01 20:32:24 compute-0 lvm[88561]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:32:24 compute-0 lvm[88561]: VG ceph_vg1 finished
Dec 01 20:32:24 compute-0 lvm[88563]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:32:24 compute-0 lvm[88563]: VG ceph_vg2 finished
Dec 01 20:32:24 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 01 20:32:24 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:24 compute-0 bash[88459]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 01 20:32:24 compute-0 bash[88459]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:25 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:25 compute-0 bash[88459]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 01 20:32:25 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/454414153; not ready for session (expect reconnect)
Dec 01 20:32:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:25 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:25 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 01 20:32:25 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 01 20:32:25 compute-0 bash[88459]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 01 20:32:25 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 01 20:32:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Dec 01 20:32:25 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 01 20:32:25 compute-0 bash[88459]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 01 20:32:25 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Dec 01 20:32:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:25 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:25 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:25 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:25 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:25 compute-0 ceph-mon[75880]: purged_snaps scrub starts
Dec 01 20:32:25 compute-0 ceph-mon[75880]: purged_snaps scrub ok
Dec 01 20:32:25 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:25 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 01 20:32:25 compute-0 ceph-mon[75880]: osdmap e11: 3 total, 1 up, 3 in
Dec 01 20:32:25 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:25 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:25 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 01 20:32:25 compute-0 ceph-mon[75880]: pgmap v28: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 01 20:32:25 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:25 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 01 20:32:25 compute-0 ceph-mon[75880]: osdmap e12: 3 total, 1 up, 3 in
Dec 01 20:32:25 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:25 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:25 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:25 compute-0 bash[88459]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:25 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:25 compute-0 bash[88459]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:25 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 01 20:32:25 compute-0 bash[88459]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 01 20:32:25 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 01 20:32:25 compute-0 bash[88459]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 01 20:32:25 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate[88475]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 01 20:32:25 compute-0 bash[88459]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 01 20:32:25 compute-0 systemd[1]: libpod-826cea15500acc779666b40fe4f78b8494f70ec6d50df1c88f3237793ac78da6.scope: Deactivated successfully.
Dec 01 20:32:25 compute-0 systemd[1]: libpod-826cea15500acc779666b40fe4f78b8494f70ec6d50df1c88f3237793ac78da6.scope: Consumed 1.653s CPU time.
Dec 01 20:32:25 compute-0 podman[88459]: 2025-12-01 20:32:25.248945524 +0000 UTC m=+1.381034082 container died 826cea15500acc779666b40fe4f78b8494f70ec6d50df1c88f3237793ac78da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-60a2fb3d436405fa0949921402d163230b2934ee0ee7e92102d362c9ab18cace-merged.mount: Deactivated successfully.
Dec 01 20:32:25 compute-0 podman[88459]: 2025-12-01 20:32:25.398203774 +0000 UTC m=+1.530292322 container remove 826cea15500acc779666b40fe4f78b8494f70ec6d50df1c88f3237793ac78da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2-activate, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:32:25 compute-0 podman[88726]: 2025-12-01 20:32:25.651073464 +0000 UTC m=+0.044706264 container create ea37d4c8b3d82996ee42a36345e5efb096c8e38380b134e1b9592d6ac4703eef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5118ad3e274e489908df517d1fa5f812f18b473f9a2f4fd7d6f08cc59cdd40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5118ad3e274e489908df517d1fa5f812f18b473f9a2f4fd7d6f08cc59cdd40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5118ad3e274e489908df517d1fa5f812f18b473f9a2f4fd7d6f08cc59cdd40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5118ad3e274e489908df517d1fa5f812f18b473f9a2f4fd7d6f08cc59cdd40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5118ad3e274e489908df517d1fa5f812f18b473f9a2f4fd7d6f08cc59cdd40/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:25 compute-0 podman[88726]: 2025-12-01 20:32:25.630811372 +0000 UTC m=+0.024444192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:25 compute-0 podman[88726]: 2025-12-01 20:32:25.751683298 +0000 UTC m=+0.145316188 container init ea37d4c8b3d82996ee42a36345e5efb096c8e38380b134e1b9592d6ac4703eef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 01 20:32:25 compute-0 podman[88726]: 2025-12-01 20:32:25.763008941 +0000 UTC m=+0.156641741 container start ea37d4c8b3d82996ee42a36345e5efb096c8e38380b134e1b9592d6ac4703eef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:25 compute-0 bash[88726]: ea37d4c8b3d82996ee42a36345e5efb096c8e38380b134e1b9592d6ac4703eef
Dec 01 20:32:25 compute-0 systemd[1]: Started Ceph osd.2 for dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:32:25 compute-0 ceph-osd[88745]: set uid:gid to 167:167 (ceph:ceph)
Dec 01 20:32:25 compute-0 ceph-osd[88745]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 01 20:32:25 compute-0 ceph-osd[88745]: pidfile_write: ignore empty --pid-file
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:25 compute-0 sudo[87740]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:32:25 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:25 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a400 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:25 compute-0 sudo[88763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:25 compute-0 sudo[88763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:25 compute-0 sudo[88763]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:25 compute-0 ceph-osd[88745]: bdev(0x55b34473a000 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:26 compute-0 ceph-osd[88745]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec 01 20:32:26 compute-0 ceph-osd[88745]: load: jerasure load: lrc 
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:26 compute-0 sudo[88793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:32:26 compute-0 sudo[88793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:26 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/454414153; not ready for session (expect reconnect)
Dec 01 20:32:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:26 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:26 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 01 20:32:26 compute-0 ceph-osd[88745]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b34473bc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b3453d1800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b3453d1800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b3453d1800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b3453d1800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount shared_bdev_used = 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: RocksDB version: 7.9.2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Git sha 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: DB SUMMARY
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: DB Session ID:  WP0LTH2ZNATSQG20I45W
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: CURRENT file:  CURRENT
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: IDENTITY file:  IDENTITY
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                         Options.error_if_exists: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.create_if_missing: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                         Options.paranoid_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                                     Options.env: 0x55b3445cbe30
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                                Options.info_log: 0x55b34561c880
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_file_opening_threads: 16
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                              Options.statistics: (nil)
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.use_fsync: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.max_log_file_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                         Options.allow_fallocate: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.use_direct_reads: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.create_missing_column_families: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                              Options.db_log_dir: 
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                                 Options.wal_dir: db.wal
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.advise_random_on_open: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.write_buffer_manager: 0x55b344630b40
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                            Options.rate_limiter: (nil)
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.unordered_write: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.row_cache: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                              Options.wal_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.allow_ingest_behind: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.two_write_queues: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.manual_wal_flush: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.wal_compression: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.atomic_flush: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.log_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.allow_data_in_errors: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.db_host_id: __hostname__
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.max_background_jobs: 4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.max_background_compactions: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.max_subcompactions: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.max_open_files: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.bytes_per_sync: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.max_background_flushes: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Compression algorithms supported:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kZSTD supported: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kXpressCompression supported: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kBZip2Compression supported: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kLZ4Compression supported: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kZlibCompression supported: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kLZ4HCCompression supported: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kSnappyCompression supported: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cc40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cc40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cc40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cc40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cc40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cc40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cc40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 31787a60-2050-4ceb-b8af-9c600a06ec77
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621146198873, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621146200971, "job": 1, "event": "recovery_finished"}
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: freelist init
Dec 01 20:32:26 compute-0 ceph-osd[88745]: freelist _read_cfg
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs umount
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b3453d1800 /var/lib/ceph/osd/ceph-2/block) close
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b3453d1800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b3453d1800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b3453d1800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bdev(0x55b3453d1800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluefs mount shared_bdev_used = 27262976
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: RocksDB version: 7.9.2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Git sha 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: DB SUMMARY
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: DB Session ID:  WP0LTH2ZNATSQG20I45X
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: CURRENT file:  CURRENT
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: IDENTITY file:  IDENTITY
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                         Options.error_if_exists: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.create_if_missing: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                         Options.paranoid_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                                     Options.env: 0x55b3457eca10
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                                Options.info_log: 0x55b34561ca00
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_file_opening_threads: 16
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                              Options.statistics: (nil)
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.use_fsync: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.max_log_file_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                         Options.allow_fallocate: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.use_direct_reads: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.create_missing_column_families: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                              Options.db_log_dir: 
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                                 Options.wal_dir: db.wal
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.advise_random_on_open: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.write_buffer_manager: 0x55b344631900
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                            Options.rate_limiter: (nil)
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.unordered_write: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.row_cache: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                              Options.wal_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.allow_ingest_behind: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.two_write_queues: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.manual_wal_flush: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.wal_compression: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.atomic_flush: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.log_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.allow_data_in_errors: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.db_host_id: __hostname__
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.max_background_jobs: 4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.max_background_compactions: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.max_subcompactions: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.max_open_files: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.bytes_per_sync: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.max_background_flushes: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Compression algorithms supported:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kZSTD supported: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kXpressCompression supported: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kBZip2Compression supported: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kLZ4Compression supported: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kZlibCompression supported: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kLZ4HCCompression supported: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         kSnappyCompression supported: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561cba0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561d0a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561d0a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:           Options.merge_operator: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.compaction_filter_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.sst_partitioner_factory: None
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b34561d0a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b3445cfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.write_buffer_size: 16777216
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.max_write_buffer_number: 64
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.compression: LZ4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.num_levels: 7
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.level: 32767
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.compression_opts.strategy: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                  Options.compression_opts.enabled: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.arena_block_size: 1048576
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.disable_auto_compactions: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.inplace_update_support: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.bloom_locality: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                    Options.max_successive_merges: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.paranoid_file_checks: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.force_consistency_checks: 1
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.report_bg_io_stats: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                               Options.ttl: 2592000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                       Options.enable_blob_files: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                           Options.min_blob_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                          Options.blob_file_size: 268435456
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb:                Options.blob_file_starting_level: 0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 31787a60-2050-4ceb-b8af-9c600a06ec77
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621146266453, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621146284244, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621146, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "31787a60-2050-4ceb-b8af-9c600a06ec77", "db_session_id": "WP0LTH2ZNATSQG20I45X", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621146287792, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621146, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "31787a60-2050-4ceb-b8af-9c600a06ec77", "db_session_id": "WP0LTH2ZNATSQG20I45X", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621146312958, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621146, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "31787a60-2050-4ceb-b8af-9c600a06ec77", "db_session_id": "WP0LTH2ZNATSQG20I45X", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621146316272, "job": 1, "event": "recovery_finished"}
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 01 20:32:26 compute-0 podman[89226]: 2025-12-01 20:32:26.349163245 +0000 UTC m=+0.051527297 container create 153bd8356314631768d51ef47c44a3f0291078cdf2576a56430faf87679350c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b345836000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: DB pointer 0x55b3457d6000
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec 01 20:32:26 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:32:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:32:26 compute-0 ceph-osd[88745]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 01 20:32:26 compute-0 ceph-osd[88745]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 01 20:32:26 compute-0 ceph-osd[88745]: _get_class not permitted to load lua
Dec 01 20:32:26 compute-0 ceph-osd[88745]: _get_class not permitted to load sdk
Dec 01 20:32:26 compute-0 ceph-osd[88745]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 01 20:32:26 compute-0 ceph-osd[88745]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 01 20:32:26 compute-0 ceph-osd[88745]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 01 20:32:26 compute-0 ceph-osd[88745]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 01 20:32:26 compute-0 ceph-osd[88745]: osd.2 0 load_pgs
Dec 01 20:32:26 compute-0 ceph-osd[88745]: osd.2 0 load_pgs opened 0 pgs
Dec 01 20:32:26 compute-0 ceph-osd[88745]: osd.2 0 log_to_monitors true
Dec 01 20:32:26 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2[88741]: 2025-12-01T20:32:26.397+0000 7f6f232fb8c0 -1 osd.2 0 log_to_monitors true
Dec 01 20:32:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 01 20:32:26 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1237789197,v1:192.168.122.100:6811/1237789197]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 01 20:32:26 compute-0 systemd[1]: Started libpod-conmon-153bd8356314631768d51ef47c44a3f0291078cdf2576a56430faf87679350c9.scope.
Dec 01 20:32:26 compute-0 podman[89226]: 2025-12-01 20:32:26.320861653 +0000 UTC m=+0.023225735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v30: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 01 20:32:26 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:26 compute-0 podman[89226]: 2025-12-01 20:32:26.460001628 +0000 UTC m=+0.162365740 container init 153bd8356314631768d51ef47c44a3f0291078cdf2576a56430faf87679350c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 01 20:32:26 compute-0 podman[89226]: 2025-12-01 20:32:26.468664108 +0000 UTC m=+0.171028170 container start 153bd8356314631768d51ef47c44a3f0291078cdf2576a56430faf87679350c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 01 20:32:26 compute-0 youthful_pasteur[89276]: 167 167
Dec 01 20:32:26 compute-0 systemd[1]: libpod-153bd8356314631768d51ef47c44a3f0291078cdf2576a56430faf87679350c9.scope: Deactivated successfully.
Dec 01 20:32:26 compute-0 podman[89226]: 2025-12-01 20:32:26.479403873 +0000 UTC m=+0.181767945 container attach 153bd8356314631768d51ef47c44a3f0291078cdf2576a56430faf87679350c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 01 20:32:26 compute-0 podman[89226]: 2025-12-01 20:32:26.480398604 +0000 UTC m=+0.182762666 container died 153bd8356314631768d51ef47c44a3f0291078cdf2576a56430faf87679350c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 01 20:32:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3822803a199d5cb337b8369daf3e83d5bc5928262cf15f23dbd23c1f72a88f1d-merged.mount: Deactivated successfully.
Dec 01 20:32:26 compute-0 podman[89226]: 2025-12-01 20:32:26.553820801 +0000 UTC m=+0.256184863 container remove 153bd8356314631768d51ef47c44a3f0291078cdf2576a56430faf87679350c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:26 compute-0 systemd[1]: libpod-conmon-153bd8356314631768d51ef47c44a3f0291078cdf2576a56430faf87679350c9.scope: Deactivated successfully.
Dec 01 20:32:26 compute-0 podman[89299]: 2025-12-01 20:32:26.747267909 +0000 UTC m=+0.055777689 container create 8f84878a0d2ccc5130eb1b0468c8cc9eaccd3b56d3f42362c69cf4facfe563d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 01 20:32:26 compute-0 systemd[1]: Started libpod-conmon-8f84878a0d2ccc5130eb1b0468c8cc9eaccd3b56d3f42362c69cf4facfe563d6.scope.
Dec 01 20:32:26 compute-0 ceph-osd[87692]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 27.931 iops: 7150.415 elapsed_sec: 0.420
Dec 01 20:32:26 compute-0 ceph-osd[87692]: log_channel(cluster) log [WRN] : OSD bench result of 7150.415251 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 01 20:32:26 compute-0 ceph-osd[87692]: osd.1 0 waiting for initial osdmap
Dec 01 20:32:26 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1[87688]: 2025-12-01T20:32:26.813+0000 7f40f0e5c640 -1 osd.1 0 waiting for initial osdmap
Dec 01 20:32:26 compute-0 podman[89299]: 2025-12-01 20:32:26.725675666 +0000 UTC m=+0.034185466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:26 compute-0 ceph-osd[87692]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 01 20:32:26 compute-0 ceph-osd[87692]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 01 20:32:26 compute-0 ceph-osd[87692]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 01 20:32:26 compute-0 ceph-osd[87692]: osd.1 12 check_osdmap_features require_osd_release unknown -> tentacle
Dec 01 20:32:26 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:26 compute-0 ceph-osd[87692]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 01 20:32:26 compute-0 ceph-osd[87692]: osd.1 12 set_numa_affinity not setting numa affinity
Dec 01 20:32:26 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-1[87688]: 2025-12-01T20:32:26.838+0000 7f40eb44f640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 01 20:32:26 compute-0 ceph-osd[87692]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Dec 01 20:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d8361b5494d6f6718862b9ff1fb58ef738ccfb07f212539a14adb6b5d6bea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d8361b5494d6f6718862b9ff1fb58ef738ccfb07f212539a14adb6b5d6bea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d8361b5494d6f6718862b9ff1fb58ef738ccfb07f212539a14adb6b5d6bea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d8361b5494d6f6718862b9ff1fb58ef738ccfb07f212539a14adb6b5d6bea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:26 compute-0 podman[89299]: 2025-12-01 20:32:26.873743534 +0000 UTC m=+0.182253354 container init 8f84878a0d2ccc5130eb1b0468c8cc9eaccd3b56d3f42362c69cf4facfe563d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haibt, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:26 compute-0 podman[89299]: 2025-12-01 20:32:26.879581518 +0000 UTC m=+0.188091308 container start 8f84878a0d2ccc5130eb1b0468c8cc9eaccd3b56d3f42362c69cf4facfe563d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:26 compute-0 podman[89299]: 2025-12-01 20:32:26.882571812 +0000 UTC m=+0.191081592 container attach 8f84878a0d2ccc5130eb1b0468c8cc9eaccd3b56d3f42362c69cf4facfe563d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haibt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 01 20:32:26 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:26 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:26 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:26 compute-0 ceph-mon[75880]: from='osd.2 [v2:192.168.122.100:6810/1237789197,v1:192.168.122.100:6811/1237789197]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 01 20:32:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 01 20:32:26 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1237789197,v1:192.168.122.100:6811/1237789197]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 01 20:32:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Dec 01 20:32:26 compute-0 ceph-osd[87692]: osd.1 13 state: booting -> active
Dec 01 20:32:26 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/454414153,v1:192.168.122.100:6807/454414153] boot
Dec 01 20:32:26 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Dec 01 20:32:26 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:32:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 01 20:32:26 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1237789197,v1:192.168.122.100:6811/1237789197]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 01 20:32:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 01 20:32:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 01 20:32:26 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:26 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:26 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:27 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 01 20:32:27 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 01 20:32:27 compute-0 lvm[89391]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:32:27 compute-0 lvm[89391]: VG ceph_vg0 finished
Dec 01 20:32:27 compute-0 lvm[89393]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:32:27 compute-0 lvm[89393]: VG ceph_vg1 finished
Dec 01 20:32:27 compute-0 lvm[89394]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:32:27 compute-0 lvm[89394]: VG ceph_vg2 finished
Dec 01 20:32:27 compute-0 jolly_haibt[89316]: {}
Dec 01 20:32:27 compute-0 systemd[1]: libpod-8f84878a0d2ccc5130eb1b0468c8cc9eaccd3b56d3f42362c69cf4facfe563d6.scope: Deactivated successfully.
Dec 01 20:32:27 compute-0 systemd[1]: libpod-8f84878a0d2ccc5130eb1b0468c8cc9eaccd3b56d3f42362c69cf4facfe563d6.scope: Consumed 1.454s CPU time.
Dec 01 20:32:27 compute-0 podman[89299]: 2025-12-01 20:32:27.805767129 +0000 UTC m=+1.114276949 container died 8f84878a0d2ccc5130eb1b0468c8cc9eaccd3b56d3f42362c69cf4facfe563d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haibt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb6d8361b5494d6f6718862b9ff1fb58ef738ccfb07f212539a14adb6b5d6bea-merged.mount: Deactivated successfully.
Dec 01 20:32:27 compute-0 podman[89299]: 2025-12-01 20:32:27.868216216 +0000 UTC m=+1.176725996 container remove 8f84878a0d2ccc5130eb1b0468c8cc9eaccd3b56d3f42362c69cf4facfe563d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_haibt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 01 20:32:27 compute-0 systemd[1]: libpod-conmon-8f84878a0d2ccc5130eb1b0468c8cc9eaccd3b56d3f42362c69cf4facfe563d6.scope: Deactivated successfully.
Dec 01 20:32:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 01 20:32:27 compute-0 sudo[88793]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:27 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1237789197,v1:192.168.122.100:6811/1237789197]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 01 20:32:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec 01 20:32:27 compute-0 ceph-osd[88745]: osd.2 0 done with init, starting boot process
Dec 01 20:32:27 compute-0 ceph-osd[88745]: osd.2 0 start_boot
Dec 01 20:32:27 compute-0 ceph-osd[88745]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 01 20:32:27 compute-0 ceph-osd[88745]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 01 20:32:27 compute-0 ceph-osd[88745]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 01 20:32:27 compute-0 ceph-osd[88745]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 01 20:32:27 compute-0 ceph-osd[88745]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec 01 20:32:27 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec 01 20:32:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:27 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:32:27 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:27 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:32:27 compute-0 ceph-mon[75880]: pgmap v30: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 01 20:32:27 compute-0 ceph-mon[75880]: OSD bench result of 7150.415251 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 01 20:32:27 compute-0 ceph-mon[75880]: from='osd.2 [v2:192.168.122.100:6810/1237789197,v1:192.168.122.100:6811/1237789197]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 01 20:32:27 compute-0 ceph-mon[75880]: osd.1 [v2:192.168.122.100:6806/454414153,v1:192.168.122.100:6807/454414153] boot
Dec 01 20:32:27 compute-0 ceph-mon[75880]: osdmap e13: 3 total, 2 up, 3 in
Dec 01 20:32:27 compute-0 ceph-mon[75880]: from='osd.2 [v2:192.168.122.100:6810/1237789197,v1:192.168.122.100:6811/1237789197]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 01 20:32:27 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 01 20:32:27 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:27 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1237789197; not ready for session (expect reconnect)
Dec 01 20:32:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:27 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:27 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:27 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:32:28 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:28 compute-0 ceph-mgr[76174]: [devicehealth INFO root] creating main.db for devicehealth
Dec 01 20:32:28 compute-0 sudo[89410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:32:28 compute-0 sudo[89410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:28 compute-0 sudo[89410]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:28 compute-0 sudo[89435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:28 compute-0 ceph-mgr[76174]: [devicehealth INFO root] Check health
Dec 01 20:32:28 compute-0 sudo[89435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:28 compute-0 sudo[89435]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:28 compute-0 ceph-mgr[76174]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec 01 20:32:28 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 01 20:32:28 compute-0 sudo[89472]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 01 20:32:28 compute-0 sudo[89472]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 01 20:32:28 compute-0 sudo[89472]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 01 20:32:28 compute-0 sudo[89472]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:28 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 01 20:32:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 01 20:32:28 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 01 20:32:28 compute-0 sudo[89471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 01 20:32:28 compute-0 sudo[89471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 creating+peering; 0 B data, 845 MiB used, 39 GiB / 40 GiB avail
Dec 01 20:32:28 compute-0 podman[89543]: 2025-12-01 20:32:28.746578631 +0000 UTC m=+0.094213687 container exec 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:28 compute-0 podman[89543]: 2025-12-01 20:32:28.865821366 +0000 UTC m=+0.213456352 container exec_died 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:32:28 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1237789197; not ready for session (expect reconnect)
Dec 01 20:32:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 01 20:32:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:28 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:28 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:28 compute-0 ceph-mon[75880]: from='osd.2 [v2:192.168.122.100:6810/1237789197,v1:192.168.122.100:6811/1237789197]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 01 20:32:28 compute-0 ceph-mon[75880]: osdmap e14: 3 total, 2 up, 3 in
Dec 01 20:32:28 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:28 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:28 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:28 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:28 compute-0 ceph-mon[75880]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 01 20:32:28 compute-0 ceph-mon[75880]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 01 20:32:28 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 01 20:32:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Dec 01 20:32:29 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Dec 01 20:32:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:29 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:29 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:29 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.xhvuzu(active, since 56s)
Dec 01 20:32:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:29 compute-0 sudo[89471]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:32:29 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:32:29 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:29 compute-0 sudo[89690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:29 compute-0 sudo[89690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:29 compute-0 sudo[89690]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:29 compute-0 sudo[89715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- inventory --format=json-pretty --filter-for-batch
Dec 01 20:32:29 compute-0 sudo[89715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:29 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1237789197; not ready for session (expect reconnect)
Dec 01 20:32:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:29 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:29 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:29 compute-0 ceph-mon[75880]: purged_snaps scrub starts
Dec 01 20:32:29 compute-0 ceph-mon[75880]: purged_snaps scrub ok
Dec 01 20:32:29 compute-0 ceph-mon[75880]: pgmap v33: 1 pgs: 1 creating+peering; 0 B data, 845 MiB used, 39 GiB / 40 GiB avail
Dec 01 20:32:29 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:29 compute-0 ceph-mon[75880]: osdmap e15: 3 total, 2 up, 3 in
Dec 01 20:32:29 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:29 compute-0 ceph-mon[75880]: mgrmap e9: compute-0.xhvuzu(active, since 56s)
Dec 01 20:32:29 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:29 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:29 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:30 compute-0 podman[89751]: 2025-12-01 20:32:30.007699199 +0000 UTC m=+0.063893593 container create 9766a594d07a96a256d2769212cae4225d06cc3c3282a5582a254b5a40568b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:30 compute-0 podman[89751]: 2025-12-01 20:32:29.963374564 +0000 UTC m=+0.019568898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:30 compute-0 systemd[1]: Started libpod-conmon-9766a594d07a96a256d2769212cae4225d06cc3c3282a5582a254b5a40568b56.scope.
Dec 01 20:32:30 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:30 compute-0 podman[89751]: 2025-12-01 20:32:30.121654777 +0000 UTC m=+0.177849201 container init 9766a594d07a96a256d2769212cae4225d06cc3c3282a5582a254b5a40568b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 01 20:32:30 compute-0 podman[89751]: 2025-12-01 20:32:30.134037527 +0000 UTC m=+0.190231871 container start 9766a594d07a96a256d2769212cae4225d06cc3c3282a5582a254b5a40568b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lehmann, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 01 20:32:30 compute-0 agitated_lehmann[89767]: 167 167
Dec 01 20:32:30 compute-0 systemd[1]: libpod-9766a594d07a96a256d2769212cae4225d06cc3c3282a5582a254b5a40568b56.scope: Deactivated successfully.
Dec 01 20:32:30 compute-0 podman[89751]: 2025-12-01 20:32:30.155933256 +0000 UTC m=+0.212127600 container attach 9766a594d07a96a256d2769212cae4225d06cc3c3282a5582a254b5a40568b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:30 compute-0 podman[89751]: 2025-12-01 20:32:30.156323019 +0000 UTC m=+0.212517333 container died 9766a594d07a96a256d2769212cae4225d06cc3c3282a5582a254b5a40568b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dbe260a45ff1d3c1131fd258c65f8df40d7cc586ff66b7cbca035a1b70c098d-merged.mount: Deactivated successfully.
Dec 01 20:32:30 compute-0 podman[89751]: 2025-12-01 20:32:30.233529729 +0000 UTC m=+0.289724043 container remove 9766a594d07a96a256d2769212cae4225d06cc3c3282a5582a254b5a40568b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_lehmann, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 20:32:30 compute-0 systemd[1]: libpod-conmon-9766a594d07a96a256d2769212cae4225d06cc3c3282a5582a254b5a40568b56.scope: Deactivated successfully.
Dec 01 20:32:30 compute-0 podman[89791]: 2025-12-01 20:32:30.40186834 +0000 UTC m=+0.048317533 container create dba0fe20905d11b1485dcc1adec0c99690965ffbc75a20e37983ff55848080fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lederberg, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Dec 01 20:32:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 creating+peering; 0 B data, 845 MiB used, 39 GiB / 40 GiB avail
Dec 01 20:32:30 compute-0 systemd[1]: Started libpod-conmon-dba0fe20905d11b1485dcc1adec0c99690965ffbc75a20e37983ff55848080fc.scope.
Dec 01 20:32:30 compute-0 podman[89791]: 2025-12-01 20:32:30.37554371 +0000 UTC m=+0.021992933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:30 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c5717835c9735062682b918577b2d4141f262a1afbe23dcd1f44f353f08805/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c5717835c9735062682b918577b2d4141f262a1afbe23dcd1f44f353f08805/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c5717835c9735062682b918577b2d4141f262a1afbe23dcd1f44f353f08805/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c5717835c9735062682b918577b2d4141f262a1afbe23dcd1f44f353f08805/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:30 compute-0 podman[89791]: 2025-12-01 20:32:30.542133346 +0000 UTC m=+0.188582629 container init dba0fe20905d11b1485dcc1adec0c99690965ffbc75a20e37983ff55848080fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:30 compute-0 podman[89791]: 2025-12-01 20:32:30.553577796 +0000 UTC m=+0.200027009 container start dba0fe20905d11b1485dcc1adec0c99690965ffbc75a20e37983ff55848080fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lederberg, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 20:32:30 compute-0 podman[89791]: 2025-12-01 20:32:30.576465527 +0000 UTC m=+0.222914740 container attach dba0fe20905d11b1485dcc1adec0c99690965ffbc75a20e37983ff55848080fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:30 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:30 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:30 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1237789197; not ready for session (expect reconnect)
Dec 01 20:32:30 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:30 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]: [
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:     {
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:         "available": false,
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:         "being_replaced": false,
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:         "ceph_device_lvm": false,
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:         "lsm_data": {},
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:         "lvs": [],
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:         "path": "/dev/sr0",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:         "rejected_reasons": [
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "Has a FileSystem",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "Insufficient space (<5GB)"
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:         ],
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:         "sys_api": {
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "actuators": null,
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "device_nodes": [
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:                 "sr0"
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             ],
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "devname": "sr0",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "human_readable_size": "482.00 KB",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "id_bus": "ata",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "model": "QEMU DVD-ROM",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "nr_requests": "2",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "parent": "/dev/sr0",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "partitions": {},
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "path": "/dev/sr0",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "removable": "1",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "rev": "2.5+",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "ro": "0",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "rotational": "1",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "sas_address": "",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "sas_device_handle": "",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "scheduler_mode": "mq-deadline",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "sectors": 0,
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "sectorsize": "2048",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "size": 493568.0,
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "support_discard": "2048",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "type": "disk",
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:             "vendor": "QEMU"
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:         }
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]:     }
Dec 01 20:32:31 compute-0 admiring_lederberg[89807]: ]
Dec 01 20:32:31 compute-0 systemd[1]: libpod-dba0fe20905d11b1485dcc1adec0c99690965ffbc75a20e37983ff55848080fc.scope: Deactivated successfully.
Dec 01 20:32:31 compute-0 podman[89791]: 2025-12-01 20:32:31.086255949 +0000 UTC m=+0.732705152 container died dba0fe20905d11b1485dcc1adec0c99690965ffbc75a20e37983ff55848080fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lederberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 01 20:32:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-38c5717835c9735062682b918577b2d4141f262a1afbe23dcd1f44f353f08805-merged.mount: Deactivated successfully.
Dec 01 20:32:31 compute-0 podman[89791]: 2025-12-01 20:32:31.176601532 +0000 UTC m=+0.823050715 container remove dba0fe20905d11b1485dcc1adec0c99690965ffbc75a20e37983ff55848080fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lederberg, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 01 20:32:31 compute-0 systemd[1]: libpod-conmon-dba0fe20905d11b1485dcc1adec0c99690965ffbc75a20e37983ff55848080fc.scope: Deactivated successfully.
Dec 01 20:32:31 compute-0 sudo[89715]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 01 20:32:31 compute-0 ceph-mgr[76174]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43688k
Dec 01 20:32:31 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43688k
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 01 20:32:31 compute-0 ceph-mgr[76174]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec 01 20:32:31 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:31 compute-0 sudo[90529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:31 compute-0 sudo[90529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:31 compute-0 sudo[90529]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:31 compute-0 sudo[90554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:32:31 compute-0 sudo[90554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:31 compute-0 sudo[90602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gobffuwqywuxmegbghacyyhfqzkfckhs ; /usr/bin/python3'
Dec 01 20:32:31 compute-0 sudo[90602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:31 compute-0 ceph-osd[88745]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.342 iops: 5975.571 elapsed_sec: 0.502
Dec 01 20:32:31 compute-0 ceph-osd[88745]: log_channel(cluster) log [WRN] : OSD bench result of 5975.571065 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 01 20:32:31 compute-0 ceph-osd[88745]: osd.2 0 waiting for initial osdmap
Dec 01 20:32:31 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2[88741]: 2025-12-01T20:32:31.696+0000 7f6f1fa8f640 -1 osd.2 0 waiting for initial osdmap
Dec 01 20:32:31 compute-0 ceph-osd[88745]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 01 20:32:31 compute-0 ceph-osd[88745]: osd.2 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 01 20:32:31 compute-0 ceph-osd[88745]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 01 20:32:31 compute-0 ceph-osd[88745]: osd.2 15 check_osdmap_features require_osd_release unknown -> tentacle
Dec 01 20:32:31 compute-0 podman[90617]: 2025-12-01 20:32:31.721407727 +0000 UTC m=+0.048990315 container create 3dc3328cbec6796369c388f1a50f140b949b4cde6531350e62ef6b2e72c6ea93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 01 20:32:31 compute-0 ceph-osd[88745]: osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 01 20:32:31 compute-0 ceph-osd[88745]: osd.2 15 set_numa_affinity not setting numa affinity
Dec 01 20:32:31 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-osd-2[88741]: 2025-12-01T20:32:31.728+0000 7f6f1a082640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 01 20:32:31 compute-0 ceph-osd[88745]: osd.2 15 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Dec 01 20:32:31 compute-0 python3[90604]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:31 compute-0 systemd[1]: Started libpod-conmon-3dc3328cbec6796369c388f1a50f140b949b4cde6531350e62ef6b2e72c6ea93.scope.
Dec 01 20:32:31 compute-0 podman[90617]: 2025-12-01 20:32:31.694984304 +0000 UTC m=+0.022566892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:31 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:31 compute-0 podman[90634]: 2025-12-01 20:32:31.835608152 +0000 UTC m=+0.061639642 container create 3e9056aee576f73686adcac4dcae7eb739838ebebb7c6aaeade028c8ee2f5217 (image=quay.io/ceph/ceph:v20, name=zen_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:31 compute-0 podman[90617]: 2025-12-01 20:32:31.844871234 +0000 UTC m=+0.172453912 container init 3dc3328cbec6796369c388f1a50f140b949b4cde6531350e62ef6b2e72c6ea93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 01 20:32:31 compute-0 podman[90617]: 2025-12-01 20:32:31.859105521 +0000 UTC m=+0.186688129 container start 3dc3328cbec6796369c388f1a50f140b949b4cde6531350e62ef6b2e72c6ea93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Dec 01 20:32:31 compute-0 jovial_shaw[90636]: 167 167
Dec 01 20:32:31 compute-0 podman[90617]: 2025-12-01 20:32:31.864207443 +0000 UTC m=+0.191790101 container attach 3dc3328cbec6796369c388f1a50f140b949b4cde6531350e62ef6b2e72c6ea93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shaw, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:32:31 compute-0 podman[90617]: 2025-12-01 20:32:31.864885183 +0000 UTC m=+0.192467801 container died 3dc3328cbec6796369c388f1a50f140b949b4cde6531350e62ef6b2e72c6ea93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:31 compute-0 systemd[1]: libpod-3dc3328cbec6796369c388f1a50f140b949b4cde6531350e62ef6b2e72c6ea93.scope: Deactivated successfully.
Dec 01 20:32:31 compute-0 systemd[1]: Started libpod-conmon-3e9056aee576f73686adcac4dcae7eb739838ebebb7c6aaeade028c8ee2f5217.scope.
Dec 01 20:32:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-645eee78608e22b6a9faa10063719001a3754a10210db38224e1a05fee8b8aed-merged.mount: Deactivated successfully.
Dec 01 20:32:31 compute-0 podman[90634]: 2025-12-01 20:32:31.813979071 +0000 UTC m=+0.040010571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:31 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00d138f57f800253306daaac092d467354d8a89d977ca2739918eb1c7ea48dc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00d138f57f800253306daaac092d467354d8a89d977ca2739918eb1c7ea48dc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00d138f57f800253306daaac092d467354d8a89d977ca2739918eb1c7ea48dc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:31 compute-0 podman[90617]: 2025-12-01 20:32:31.916479988 +0000 UTC m=+0.244062566 container remove 3dc3328cbec6796369c388f1a50f140b949b4cde6531350e62ef6b2e72c6ea93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shaw, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:31 compute-0 podman[90634]: 2025-12-01 20:32:31.92894562 +0000 UTC m=+0.154977120 container init 3e9056aee576f73686adcac4dcae7eb739838ebebb7c6aaeade028c8ee2f5217 (image=quay.io/ceph/ceph:v20, name=zen_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:32:31 compute-0 systemd[1]: libpod-conmon-3dc3328cbec6796369c388f1a50f140b949b4cde6531350e62ef6b2e72c6ea93.scope: Deactivated successfully.
Dec 01 20:32:31 compute-0 podman[90634]: 2025-12-01 20:32:31.935854558 +0000 UTC m=+0.161886048 container start 3e9056aee576f73686adcac4dcae7eb739838ebebb7c6aaeade028c8ee2f5217 (image=quay.io/ceph/ceph:v20, name=zen_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:32:31 compute-0 podman[90634]: 2025-12-01 20:32:31.93939062 +0000 UTC m=+0.165422140 container attach 3e9056aee576f73686adcac4dcae7eb739838ebebb7c6aaeade028c8ee2f5217 (image=quay.io/ceph/ceph:v20, name=zen_hamilton, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:31 compute-0 ceph-mgr[76174]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1237789197; not ready for session (expect reconnect)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:31 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:31 compute-0 ceph-mgr[76174]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 01 20:32:32 compute-0 ceph-mon[75880]: pgmap v35: 1 pgs: 1 creating+peering; 0 B data, 845 MiB used, 39 GiB / 40 GiB avail
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:32 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:32 compute-0 podman[90695]: 2025-12-01 20:32:32.083439415 +0000 UTC m=+0.040060722 container create 614659c0db3aa7209668048cfd660975158d87e54fb721df91a69c34afa35a28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yalow, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:32 compute-0 systemd[1]: Started libpod-conmon-614659c0db3aa7209668048cfd660975158d87e54fb721df91a69c34afa35a28.scope.
Dec 01 20:32:32 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0883a36b44db33bf8bcc5d38c51b4fb1be043d0985482a13d903e381b214ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0883a36b44db33bf8bcc5d38c51b4fb1be043d0985482a13d903e381b214ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0883a36b44db33bf8bcc5d38c51b4fb1be043d0985482a13d903e381b214ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0883a36b44db33bf8bcc5d38c51b4fb1be043d0985482a13d903e381b214ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0883a36b44db33bf8bcc5d38c51b4fb1be043d0985482a13d903e381b214ef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:32 compute-0 podman[90695]: 2025-12-01 20:32:32.064767737 +0000 UTC m=+0.021389084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:32 compute-0 podman[90695]: 2025-12-01 20:32:32.17826231 +0000 UTC m=+0.134883637 container init 614659c0db3aa7209668048cfd660975158d87e54fb721df91a69c34afa35a28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:32 compute-0 podman[90695]: 2025-12-01 20:32:32.188530554 +0000 UTC m=+0.145151891 container start 614659c0db3aa7209668048cfd660975158d87e54fb721df91a69c34afa35a28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yalow, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 01 20:32:32 compute-0 podman[90695]: 2025-12-01 20:32:32.193717557 +0000 UTC m=+0.150338864 container attach 614659c0db3aa7209668048cfd660975158d87e54fb721df91a69c34afa35a28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yalow, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 01 20:32:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Dec 01 20:32:32 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1237789197,v1:192.168.122.100:6811/1237789197] boot
Dec 01 20:32:32 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Dec 01 20:32:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 01 20:32:32 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:32 compute-0 ceph-osd[88745]: osd.2 16 state: booting -> active
Dec 01 20:32:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 01 20:32:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 01 20:32:32 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/540471222' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 01 20:32:32 compute-0 zen_hamilton[90657]: 
Dec 01 20:32:32 compute-0 zen_hamilton[90657]: {"fsid":"dcf60a89-bba0-58b0-a1bf-d4bde723201b","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":78,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1764621152,"num_in_osds":3,"osd_in_since":1764621128,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":885563392,"bytes_avail":42055720960,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2025-12-01T20:31:12:115619+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-01T20:31:12.117544+0000","services":{}},"progress_events":{}}
Dec 01 20:32:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:32:32
Dec 01 20:32:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:32:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:32:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['.mgr']
Dec 01 20:32:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:32:32 compute-0 systemd[1]: libpod-3e9056aee576f73686adcac4dcae7eb739838ebebb7c6aaeade028c8ee2f5217.scope: Deactivated successfully.
Dec 01 20:32:32 compute-0 podman[90634]: 2025-12-01 20:32:32.453353322 +0000 UTC m=+0.679384832 container died 3e9056aee576f73686adcac4dcae7eb739838ebebb7c6aaeade028c8ee2f5217 (image=quay.io/ceph/ceph:v20, name=zen_hamilton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a00d138f57f800253306daaac092d467354d8a89d977ca2739918eb1c7ea48dc-merged.mount: Deactivated successfully.
Dec 01 20:32:32 compute-0 podman[90634]: 2025-12-01 20:32:32.500026652 +0000 UTC m=+0.726058172 container remove 3e9056aee576f73686adcac4dcae7eb739838ebebb7c6aaeade028c8ee2f5217 (image=quay.io/ceph/ceph:v20, name=zen_hamilton, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:32 compute-0 systemd[1]: libpod-conmon-3e9056aee576f73686adcac4dcae7eb739838ebebb7c6aaeade028c8ee2f5217.scope: Deactivated successfully.
Dec 01 20:32:32 compute-0 sudo[90602]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:32 compute-0 sharp_yalow[90714]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:32:32 compute-0 sharp_yalow[90714]: --> All data devices are unavailable
Dec 01 20:32:32 compute-0 systemd[1]: libpod-614659c0db3aa7209668048cfd660975158d87e54fb721df91a69c34afa35a28.scope: Deactivated successfully.
Dec 01 20:32:32 compute-0 podman[90695]: 2025-12-01 20:32:32.717288192 +0000 UTC m=+0.673909539 container died 614659c0db3aa7209668048cfd660975158d87e54fb721df91a69c34afa35a28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yalow, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e0883a36b44db33bf8bcc5d38c51b4fb1be043d0985482a13d903e381b214ef-merged.mount: Deactivated successfully.
Dec 01 20:32:32 compute-0 podman[90695]: 2025-12-01 20:32:32.773620546 +0000 UTC m=+0.730241863 container remove 614659c0db3aa7209668048cfd660975158d87e54fb721df91a69c34afa35a28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 01 20:32:32 compute-0 systemd[1]: libpod-conmon-614659c0db3aa7209668048cfd660975158d87e54fb721df91a69c34afa35a28.scope: Deactivated successfully.
Dec 01 20:32:32 compute-0 sudo[90781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhpghrgsrukbkuretpuplremzczllfzc ; /usr/bin/python3'
Dec 01 20:32:32 compute-0 sudo[90781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:32 compute-0 sudo[90554]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:32 compute-0 sudo[90784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:32 compute-0 sudo[90784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:32 compute-0 sudo[90784]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:32 compute-0 python3[90783]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:32 compute-0 sudo[90809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:32:32 compute-0 sudo[90809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:33 compute-0 ceph-mon[75880]: Adjusting osd_memory_target on compute-0 to 43688k
Dec 01 20:32:33 compute-0 ceph-mon[75880]: Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Dec 01 20:32:33 compute-0 ceph-mon[75880]: OSD bench result of 5975.571065 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 01 20:32:33 compute-0 ceph-mon[75880]: osd.2 [v2:192.168.122.100:6810/1237789197,v1:192.168.122.100:6811/1237789197] boot
Dec 01 20:32:33 compute-0 ceph-mon[75880]: osdmap e16: 3 total, 3 up, 3 in
Dec 01 20:32:33 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 01 20:32:33 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/540471222' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 01 20:32:33 compute-0 podman[90833]: 2025-12-01 20:32:33.077811383 +0000 UTC m=+0.066805504 container create 4d38b88aef69c03d2c1de17f20f2cfc25547be74d395dd827ccaa454bd24df78 (image=quay.io/ceph/ceph:v20, name=infallible_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 01 20:32:33 compute-0 systemd[1]: Started libpod-conmon-4d38b88aef69c03d2c1de17f20f2cfc25547be74d395dd827ccaa454bd24df78.scope.
Dec 01 20:32:33 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8cfc33a731a8b234a782e16656da529b48a1903b8b60df26e2a7255b3c82b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8cfc33a731a8b234a782e16656da529b48a1903b8b60df26e2a7255b3c82b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:33 compute-0 podman[90833]: 2025-12-01 20:32:33.155711286 +0000 UTC m=+0.144705477 container init 4d38b88aef69c03d2c1de17f20f2cfc25547be74d395dd827ccaa454bd24df78 (image=quay.io/ceph/ceph:v20, name=infallible_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 01 20:32:33 compute-0 podman[90833]: 2025-12-01 20:32:33.057290017 +0000 UTC m=+0.046284178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:33 compute-0 podman[90833]: 2025-12-01 20:32:33.163812891 +0000 UTC m=+0.152807042 container start 4d38b88aef69c03d2c1de17f20f2cfc25547be74d395dd827ccaa454bd24df78 (image=quay.io/ceph/ceph:v20, name=infallible_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 01 20:32:33 compute-0 podman[90833]: 2025-12-01 20:32:33.168263762 +0000 UTC m=+0.157257913 container attach 4d38b88aef69c03d2c1de17f20f2cfc25547be74d395dd827ccaa454bd24df78 (image=quay.io/ceph/ceph:v20, name=infallible_ptolemy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 01 20:32:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:32:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec 01 20:32:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0032335874926743505 quantized to 1 (current 1)
Dec 01 20:32:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:32:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:32:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:32:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:32:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:32:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:32:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:32:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:32:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 01 20:32:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec 01 20:32:33 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec 01 20:32:33 compute-0 podman[90867]: 2025-12-01 20:32:33.338273694 +0000 UTC m=+0.063804500 container create bab08691aaa3d3d00031bdfdbe8e365cf20ad3ded7c0cefd0ca8ddf7a5543482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:33 compute-0 systemd[1]: Started libpod-conmon-bab08691aaa3d3d00031bdfdbe8e365cf20ad3ded7c0cefd0ca8ddf7a5543482.scope.
Dec 01 20:32:33 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:33 compute-0 podman[90867]: 2025-12-01 20:32:33.31050194 +0000 UTC m=+0.036032786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:33 compute-0 podman[90867]: 2025-12-01 20:32:33.413263905 +0000 UTC m=+0.138794781 container init bab08691aaa3d3d00031bdfdbe8e365cf20ad3ded7c0cefd0ca8ddf7a5543482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chaum, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:33 compute-0 podman[90867]: 2025-12-01 20:32:33.419681947 +0000 UTC m=+0.145212753 container start bab08691aaa3d3d00031bdfdbe8e365cf20ad3ded7c0cefd0ca8ddf7a5543482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chaum, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:33 compute-0 priceless_chaum[90900]: 167 167
Dec 01 20:32:33 compute-0 systemd[1]: libpod-bab08691aaa3d3d00031bdfdbe8e365cf20ad3ded7c0cefd0ca8ddf7a5543482.scope: Deactivated successfully.
Dec 01 20:32:33 compute-0 podman[90867]: 2025-12-01 20:32:33.423546139 +0000 UTC m=+0.149077045 container attach bab08691aaa3d3d00031bdfdbe8e365cf20ad3ded7c0cefd0ca8ddf7a5543482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chaum, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:33 compute-0 podman[90867]: 2025-12-01 20:32:33.424790178 +0000 UTC m=+0.150321054 container died bab08691aaa3d3d00031bdfdbe8e365cf20ad3ded7c0cefd0ca8ddf7a5543482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chaum, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 01 20:32:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4acd0800e71625fba66da5a902104f9d59ccfa099ce41851e38a10afa4110cef-merged.mount: Deactivated successfully.
Dec 01 20:32:33 compute-0 podman[90867]: 2025-12-01 20:32:33.466228473 +0000 UTC m=+0.191759279 container remove bab08691aaa3d3d00031bdfdbe8e365cf20ad3ded7c0cefd0ca8ddf7a5543482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 01 20:32:33 compute-0 systemd[1]: libpod-conmon-bab08691aaa3d3d00031bdfdbe8e365cf20ad3ded7c0cefd0ca8ddf7a5543482.scope: Deactivated successfully.
Dec 01 20:32:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 01 20:32:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/76791224' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:33 compute-0 podman[90927]: 2025-12-01 20:32:33.611427915 +0000 UTC m=+0.037777701 container create 1df2f89df6ffc2752d5d0fe01a95f389b25d088967798657cc34a2d39c0c5259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:33 compute-0 systemd[1]: Started libpod-conmon-1df2f89df6ffc2752d5d0fe01a95f389b25d088967798657cc34a2d39c0c5259.scope.
Dec 01 20:32:33 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b603f502aa1197dccb95e89c5609ee88d45ff95cad127b7f16d7ac83ffb361b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b603f502aa1197dccb95e89c5609ee88d45ff95cad127b7f16d7ac83ffb361b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b603f502aa1197dccb95e89c5609ee88d45ff95cad127b7f16d7ac83ffb361b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b603f502aa1197dccb95e89c5609ee88d45ff95cad127b7f16d7ac83ffb361b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:33 compute-0 podman[90927]: 2025-12-01 20:32:33.59636525 +0000 UTC m=+0.022715056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:33 compute-0 podman[90927]: 2025-12-01 20:32:33.703773033 +0000 UTC m=+0.130122839 container init 1df2f89df6ffc2752d5d0fe01a95f389b25d088967798657cc34a2d39c0c5259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_buck, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:33 compute-0 podman[90927]: 2025-12-01 20:32:33.718125024 +0000 UTC m=+0.144474810 container start 1df2f89df6ffc2752d5d0fe01a95f389b25d088967798657cc34a2d39c0c5259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_buck, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 01 20:32:33 compute-0 podman[90927]: 2025-12-01 20:32:33.721290473 +0000 UTC m=+0.147640269 container attach 1df2f89df6ffc2752d5d0fe01a95f389b25d088967798657cc34a2d39c0c5259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_buck, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:34 compute-0 charming_buck[90944]: {
Dec 01 20:32:34 compute-0 charming_buck[90944]:     "0": [
Dec 01 20:32:34 compute-0 charming_buck[90944]:         {
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "devices": [
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "/dev/loop3"
Dec 01 20:32:34 compute-0 charming_buck[90944]:             ],
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_name": "ceph_lv0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_size": "21470642176",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "name": "ceph_lv0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "tags": {
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.cluster_name": "ceph",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.crush_device_class": "",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.encrypted": "0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.objectstore": "bluestore",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.osd_id": "0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.type": "block",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.vdo": "0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.with_tpm": "0"
Dec 01 20:32:34 compute-0 charming_buck[90944]:             },
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "type": "block",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "vg_name": "ceph_vg0"
Dec 01 20:32:34 compute-0 charming_buck[90944]:         }
Dec 01 20:32:34 compute-0 charming_buck[90944]:     ],
Dec 01 20:32:34 compute-0 charming_buck[90944]:     "1": [
Dec 01 20:32:34 compute-0 charming_buck[90944]:         {
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "devices": [
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "/dev/loop4"
Dec 01 20:32:34 compute-0 charming_buck[90944]:             ],
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_name": "ceph_lv1",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_size": "21470642176",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "name": "ceph_lv1",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "tags": {
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.cluster_name": "ceph",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.crush_device_class": "",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.encrypted": "0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.objectstore": "bluestore",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.osd_id": "1",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.type": "block",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.vdo": "0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.with_tpm": "0"
Dec 01 20:32:34 compute-0 charming_buck[90944]:             },
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "type": "block",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "vg_name": "ceph_vg1"
Dec 01 20:32:34 compute-0 charming_buck[90944]:         }
Dec 01 20:32:34 compute-0 charming_buck[90944]:     ],
Dec 01 20:32:34 compute-0 charming_buck[90944]:     "2": [
Dec 01 20:32:34 compute-0 charming_buck[90944]:         {
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "devices": [
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "/dev/loop5"
Dec 01 20:32:34 compute-0 charming_buck[90944]:             ],
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_name": "ceph_lv2",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_size": "21470642176",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "name": "ceph_lv2",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "tags": {
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.cluster_name": "ceph",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.crush_device_class": "",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.encrypted": "0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.objectstore": "bluestore",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.osd_id": "2",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.type": "block",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.vdo": "0",
Dec 01 20:32:34 compute-0 charming_buck[90944]:                 "ceph.with_tpm": "0"
Dec 01 20:32:34 compute-0 charming_buck[90944]:             },
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "type": "block",
Dec 01 20:32:34 compute-0 charming_buck[90944]:             "vg_name": "ceph_vg2"
Dec 01 20:32:34 compute-0 charming_buck[90944]:         }
Dec 01 20:32:34 compute-0 charming_buck[90944]:     ]
Dec 01 20:32:34 compute-0 charming_buck[90944]: }
Dec 01 20:32:34 compute-0 ceph-mon[75880]: pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 01 20:32:34 compute-0 ceph-mon[75880]: osdmap e17: 3 total, 3 up, 3 in
Dec 01 20:32:34 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/76791224' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:34 compute-0 systemd[1]: libpod-1df2f89df6ffc2752d5d0fe01a95f389b25d088967798657cc34a2d39c0c5259.scope: Deactivated successfully.
Dec 01 20:32:34 compute-0 podman[90927]: 2025-12-01 20:32:34.045741769 +0000 UTC m=+0.472091595 container died 1df2f89df6ffc2752d5d0fe01a95f389b25d088967798657cc34a2d39c0c5259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 01 20:32:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b603f502aa1197dccb95e89c5609ee88d45ff95cad127b7f16d7ac83ffb361b-merged.mount: Deactivated successfully.
Dec 01 20:32:34 compute-0 podman[90927]: 2025-12-01 20:32:34.103337183 +0000 UTC m=+0.529686969 container remove 1df2f89df6ffc2752d5d0fe01a95f389b25d088967798657cc34a2d39c0c5259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_buck, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:34 compute-0 systemd[1]: libpod-conmon-1df2f89df6ffc2752d5d0fe01a95f389b25d088967798657cc34a2d39c0c5259.scope: Deactivated successfully.
Dec 01 20:32:34 compute-0 sudo[90809]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:34 compute-0 sudo[90963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:34 compute-0 sudo[90963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:34 compute-0 sudo[90963]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:34 compute-0 sudo[90988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:32:34 compute-0 sudo[90988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 01 20:32:34 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/76791224' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec 01 20:32:34 compute-0 infallible_ptolemy[90850]: pool 'vms' created
Dec 01 20:32:34 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec 01 20:32:34 compute-0 systemd[1]: libpod-4d38b88aef69c03d2c1de17f20f2cfc25547be74d395dd827ccaa454bd24df78.scope: Deactivated successfully.
Dec 01 20:32:34 compute-0 podman[90833]: 2025-12-01 20:32:34.307896024 +0000 UTC m=+1.296890165 container died 4d38b88aef69c03d2c1de17f20f2cfc25547be74d395dd827ccaa454bd24df78 (image=quay.io/ceph/ceph:v20, name=infallible_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf8cfc33a731a8b234a782e16656da529b48a1903b8b60df26e2a7255b3c82b5-merged.mount: Deactivated successfully.
Dec 01 20:32:34 compute-0 podman[90833]: 2025-12-01 20:32:34.356081451 +0000 UTC m=+1.345075572 container remove 4d38b88aef69c03d2c1de17f20f2cfc25547be74d395dd827ccaa454bd24df78 (image=quay.io/ceph/ceph:v20, name=infallible_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:32:34 compute-0 systemd[1]: libpod-conmon-4d38b88aef69c03d2c1de17f20f2cfc25547be74d395dd827ccaa454bd24df78.scope: Deactivated successfully.
Dec 01 20:32:34 compute-0 sudo[90781]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v40: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Dec 01 20:32:34 compute-0 sudo[91051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buvyozhcbaauithkbafzulnjmyymfcel ; /usr/bin/python3'
Dec 01 20:32:34 compute-0 sudo[91051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:34 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:32:34 compute-0 podman[91064]: 2025-12-01 20:32:34.636433297 +0000 UTC m=+0.071096209 container create 5f2baa840419ad7f2e049c1e8468986e59e18edb850b76091bfd3e5e9d00d84d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 01 20:32:34 compute-0 systemd[1]: Started libpod-conmon-5f2baa840419ad7f2e049c1e8468986e59e18edb850b76091bfd3e5e9d00d84d.scope.
Dec 01 20:32:34 compute-0 python3[91063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:34 compute-0 podman[91064]: 2025-12-01 20:32:34.605872635 +0000 UTC m=+0.040535577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:34 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:34 compute-0 podman[91064]: 2025-12-01 20:32:34.732085559 +0000 UTC m=+0.166748521 container init 5f2baa840419ad7f2e049c1e8468986e59e18edb850b76091bfd3e5e9d00d84d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:34 compute-0 podman[91064]: 2025-12-01 20:32:34.740560756 +0000 UTC m=+0.175223628 container start 5f2baa840419ad7f2e049c1e8468986e59e18edb850b76091bfd3e5e9d00d84d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Dec 01 20:32:34 compute-0 podman[91064]: 2025-12-01 20:32:34.744992926 +0000 UTC m=+0.179655838 container attach 5f2baa840419ad7f2e049c1e8468986e59e18edb850b76091bfd3e5e9d00d84d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:32:34 compute-0 optimistic_zhukovsky[91080]: 167 167
Dec 01 20:32:34 compute-0 systemd[1]: libpod-5f2baa840419ad7f2e049c1e8468986e59e18edb850b76091bfd3e5e9d00d84d.scope: Deactivated successfully.
Dec 01 20:32:34 compute-0 podman[91064]: 2025-12-01 20:32:34.747716222 +0000 UTC m=+0.182379134 container died 5f2baa840419ad7f2e049c1e8468986e59e18edb850b76091bfd3e5e9d00d84d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 01 20:32:34 compute-0 podman[91083]: 2025-12-01 20:32:34.764912422 +0000 UTC m=+0.064566163 container create da3e845b812c5af4eaa5362046be179db46183c94b9d687abebafd2f39e4a664 (image=quay.io/ceph/ceph:v20, name=mystifying_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:32:34 compute-0 systemd[1]: Started libpod-conmon-da3e845b812c5af4eaa5362046be179db46183c94b9d687abebafd2f39e4a664.scope.
Dec 01 20:32:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-29075cc1102488fac239126b7cafe6ee7d1a2b3a8a3077ef486a7f9abf465660-merged.mount: Deactivated successfully.
Dec 01 20:32:34 compute-0 podman[91064]: 2025-12-01 20:32:34.818388856 +0000 UTC m=+0.253051738 container remove 5f2baa840419ad7f2e049c1e8468986e59e18edb850b76091bfd3e5e9d00d84d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:34 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:34 compute-0 systemd[1]: libpod-conmon-5f2baa840419ad7f2e049c1e8468986e59e18edb850b76091bfd3e5e9d00d84d.scope: Deactivated successfully.
Dec 01 20:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38f89597d889029f8afad2915fcd66985053f8ba0a12266031bd49c85adbf81/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38f89597d889029f8afad2915fcd66985053f8ba0a12266031bd49c85adbf81/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:34 compute-0 podman[91083]: 2025-12-01 20:32:34.739522064 +0000 UTC m=+0.039175775 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:34 compute-0 podman[91083]: 2025-12-01 20:32:34.84993125 +0000 UTC m=+0.149584961 container init da3e845b812c5af4eaa5362046be179db46183c94b9d687abebafd2f39e4a664 (image=quay.io/ceph/ceph:v20, name=mystifying_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 01 20:32:34 compute-0 podman[91083]: 2025-12-01 20:32:34.85914839 +0000 UTC m=+0.158802111 container start da3e845b812c5af4eaa5362046be179db46183c94b9d687abebafd2f39e4a664 (image=quay.io/ceph/ceph:v20, name=mystifying_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 01 20:32:34 compute-0 podman[91083]: 2025-12-01 20:32:34.864167858 +0000 UTC m=+0.163821569 container attach da3e845b812c5af4eaa5362046be179db46183c94b9d687abebafd2f39e4a664 (image=quay.io/ceph/ceph:v20, name=mystifying_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:35 compute-0 podman[91124]: 2025-12-01 20:32:35.003710751 +0000 UTC m=+0.049300153 container create 3953ff4a4d2fd6513dcac0bdc4f9307069d1c3ec737594bd73f17b60f7487e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 01 20:32:35 compute-0 systemd[1]: Started libpod-conmon-3953ff4a4d2fd6513dcac0bdc4f9307069d1c3ec737594bd73f17b60f7487e0d.scope.
Dec 01 20:32:35 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9241f1ea04a02175fbdd73190de0e2cc2b79bed67239c0fdd1702eaefb5038b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9241f1ea04a02175fbdd73190de0e2cc2b79bed67239c0fdd1702eaefb5038b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9241f1ea04a02175fbdd73190de0e2cc2b79bed67239c0fdd1702eaefb5038b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9241f1ea04a02175fbdd73190de0e2cc2b79bed67239c0fdd1702eaefb5038b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:35 compute-0 podman[91124]: 2025-12-01 20:32:35.076515044 +0000 UTC m=+0.122104496 container init 3953ff4a4d2fd6513dcac0bdc4f9307069d1c3ec737594bd73f17b60f7487e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:35 compute-0 podman[91124]: 2025-12-01 20:32:35.087906193 +0000 UTC m=+0.133495595 container start 3953ff4a4d2fd6513dcac0bdc4f9307069d1c3ec737594bd73f17b60f7487e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:35 compute-0 podman[91124]: 2025-12-01 20:32:34.988991828 +0000 UTC m=+0.034581240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:35 compute-0 podman[91124]: 2025-12-01 20:32:35.091424324 +0000 UTC m=+0.137013756 container attach 3953ff4a4d2fd6513dcac0bdc4f9307069d1c3ec737594bd73f17b60f7487e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 01 20:32:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1454178845' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 01 20:32:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1454178845' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec 01 20:32:35 compute-0 mystifying_bassi[91113]: pool 'volumes' created
Dec 01 20:32:35 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/76791224' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:35 compute-0 ceph-mon[75880]: osdmap e18: 3 total, 3 up, 3 in
Dec 01 20:32:35 compute-0 ceph-mon[75880]: pgmap v40: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Dec 01 20:32:35 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1454178845' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:35 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec 01 20:32:35 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:32:35 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:32:35 compute-0 systemd[1]: libpod-da3e845b812c5af4eaa5362046be179db46183c94b9d687abebafd2f39e4a664.scope: Deactivated successfully.
Dec 01 20:32:35 compute-0 podman[91083]: 2025-12-01 20:32:35.335570411 +0000 UTC m=+0.635224112 container died da3e845b812c5af4eaa5362046be179db46183c94b9d687abebafd2f39e4a664 (image=quay.io/ceph/ceph:v20, name=mystifying_bassi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 01 20:32:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b38f89597d889029f8afad2915fcd66985053f8ba0a12266031bd49c85adbf81-merged.mount: Deactivated successfully.
Dec 01 20:32:35 compute-0 podman[91083]: 2025-12-01 20:32:35.387427043 +0000 UTC m=+0.687080744 container remove da3e845b812c5af4eaa5362046be179db46183c94b9d687abebafd2f39e4a664 (image=quay.io/ceph/ceph:v20, name=mystifying_bassi, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:35 compute-0 systemd[1]: libpod-conmon-da3e845b812c5af4eaa5362046be179db46183c94b9d687abebafd2f39e4a664.scope: Deactivated successfully.
Dec 01 20:32:35 compute-0 sudo[91051]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:35 compute-0 sudo[91243]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyobgktykwldsmjgjrprazfcbjrwruxv ; /usr/bin/python3'
Dec 01 20:32:35 compute-0 sudo[91243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:35 compute-0 python3[91250]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:35 compute-0 podman[91270]: 2025-12-01 20:32:35.703433033 +0000 UTC m=+0.040273879 container create 8d65482d4dbe21b31588ac6d42dfdd77a8c1e2687aea212b480a47e5bf705bad (image=quay.io/ceph/ceph:v20, name=upbeat_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:35 compute-0 systemd[1]: Started libpod-conmon-8d65482d4dbe21b31588ac6d42dfdd77a8c1e2687aea212b480a47e5bf705bad.scope.
Dec 01 20:32:35 compute-0 lvm[91298]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:32:35 compute-0 lvm[91298]: VG ceph_vg1 finished
Dec 01 20:32:35 compute-0 lvm[91297]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:32:35 compute-0 lvm[91297]: VG ceph_vg0 finished
Dec 01 20:32:35 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb360ec9cb1ac52ad73c2fb593e65b15e6247de24f7550d47a8d99893af6d0d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb360ec9cb1ac52ad73c2fb593e65b15e6247de24f7550d47a8d99893af6d0d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:35 compute-0 lvm[91300]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:32:35 compute-0 lvm[91300]: VG ceph_vg2 finished
Dec 01 20:32:35 compute-0 podman[91270]: 2025-12-01 20:32:35.68491561 +0000 UTC m=+0.021756396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:35 compute-0 podman[91270]: 2025-12-01 20:32:35.791565878 +0000 UTC m=+0.128406754 container init 8d65482d4dbe21b31588ac6d42dfdd77a8c1e2687aea212b480a47e5bf705bad (image=quay.io/ceph/ceph:v20, name=upbeat_swanson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:35 compute-0 podman[91270]: 2025-12-01 20:32:35.802099209 +0000 UTC m=+0.138939995 container start 8d65482d4dbe21b31588ac6d42dfdd77a8c1e2687aea212b480a47e5bf705bad (image=quay.io/ceph/ceph:v20, name=upbeat_swanson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:35 compute-0 podman[91270]: 2025-12-01 20:32:35.80561654 +0000 UTC m=+0.142457366 container attach 8d65482d4dbe21b31588ac6d42dfdd77a8c1e2687aea212b480a47e5bf705bad (image=quay.io/ceph/ceph:v20, name=upbeat_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:35 compute-0 crazy_satoshi[91160]: {}
Dec 01 20:32:35 compute-0 podman[91124]: 2025-12-01 20:32:35.881389746 +0000 UTC m=+0.926979148 container died 3953ff4a4d2fd6513dcac0bdc4f9307069d1c3ec737594bd73f17b60f7487e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_satoshi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Dec 01 20:32:35 compute-0 systemd[1]: libpod-3953ff4a4d2fd6513dcac0bdc4f9307069d1c3ec737594bd73f17b60f7487e0d.scope: Deactivated successfully.
Dec 01 20:32:35 compute-0 systemd[1]: libpod-3953ff4a4d2fd6513dcac0bdc4f9307069d1c3ec737594bd73f17b60f7487e0d.scope: Consumed 1.213s CPU time.
Dec 01 20:32:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9241f1ea04a02175fbdd73190de0e2cc2b79bed67239c0fdd1702eaefb5038b0-merged.mount: Deactivated successfully.
Dec 01 20:32:35 compute-0 podman[91124]: 2025-12-01 20:32:35.916263964 +0000 UTC m=+0.961853356 container remove 3953ff4a4d2fd6513dcac0bdc4f9307069d1c3ec737594bd73f17b60f7487e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Dec 01 20:32:35 compute-0 systemd[1]: libpod-conmon-3953ff4a4d2fd6513dcac0bdc4f9307069d1c3ec737594bd73f17b60f7487e0d.scope: Deactivated successfully.
Dec 01 20:32:35 compute-0 sudo[90988]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:32:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:32:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:36 compute-0 sudo[91334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:32:36 compute-0 sudo[91334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:36 compute-0 sudo[91334]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 01 20:32:36 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/136114924' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 01 20:32:36 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1454178845' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:36 compute-0 ceph-mon[75880]: osdmap e19: 3 total, 3 up, 3 in
Dec 01 20:32:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:36 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/136114924' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:36 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/136114924' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec 01 20:32:36 compute-0 upbeat_swanson[91291]: pool 'backups' created
Dec 01 20:32:36 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec 01 20:32:36 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:32:36 compute-0 systemd[1]: libpod-8d65482d4dbe21b31588ac6d42dfdd77a8c1e2687aea212b480a47e5bf705bad.scope: Deactivated successfully.
Dec 01 20:32:36 compute-0 podman[91270]: 2025-12-01 20:32:36.36173106 +0000 UTC m=+0.698571866 container died 8d65482d4dbe21b31588ac6d42dfdd77a8c1e2687aea212b480a47e5bf705bad (image=quay.io/ceph/ceph:v20, name=upbeat_swanson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bb360ec9cb1ac52ad73c2fb593e65b15e6247de24f7550d47a8d99893af6d0d-merged.mount: Deactivated successfully.
Dec 01 20:32:36 compute-0 podman[91270]: 2025-12-01 20:32:36.419440387 +0000 UTC m=+0.756281203 container remove 8d65482d4dbe21b31588ac6d42dfdd77a8c1e2687aea212b480a47e5bf705bad (image=quay.io/ceph/ceph:v20, name=upbeat_swanson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Dec 01 20:32:36 compute-0 systemd[1]: libpod-conmon-8d65482d4dbe21b31588ac6d42dfdd77a8c1e2687aea212b480a47e5bf705bad.scope: Deactivated successfully.
Dec 01 20:32:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v43: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Dec 01 20:32:36 compute-0 sudo[91243]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:36 compute-0 sudo[91398]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olewmjkjkovaoaxfmelecldvzpnahhzp ; /usr/bin/python3'
Dec 01 20:32:36 compute-0 sudo[91398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:36 compute-0 python3[91400]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:36 compute-0 podman[91401]: 2025-12-01 20:32:36.846827594 +0000 UTC m=+0.062387596 container create e15d657f7a6f9f0cd3dddaf7edca75196d61e2fb152a2b82724007fa7a8392c8 (image=quay.io/ceph/ceph:v20, name=hungry_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 01 20:32:36 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:32:36 compute-0 systemd[1]: Started libpod-conmon-e15d657f7a6f9f0cd3dddaf7edca75196d61e2fb152a2b82724007fa7a8392c8.scope.
Dec 01 20:32:36 compute-0 podman[91401]: 2025-12-01 20:32:36.81527839 +0000 UTC m=+0.030838432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:36 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bafbde22735c1a033ffbf115152882638e593266ef2f3e7cdafe8fddd0c1c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bafbde22735c1a033ffbf115152882638e593266ef2f3e7cdafe8fddd0c1c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:36 compute-0 podman[91401]: 2025-12-01 20:32:36.944608772 +0000 UTC m=+0.160168754 container init e15d657f7a6f9f0cd3dddaf7edca75196d61e2fb152a2b82724007fa7a8392c8 (image=quay.io/ceph/ceph:v20, name=hungry_turing, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 01 20:32:36 compute-0 podman[91401]: 2025-12-01 20:32:36.955790024 +0000 UTC m=+0.171349986 container start e15d657f7a6f9f0cd3dddaf7edca75196d61e2fb152a2b82724007fa7a8392c8 (image=quay.io/ceph/ceph:v20, name=hungry_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 01 20:32:36 compute-0 podman[91401]: 2025-12-01 20:32:36.967824794 +0000 UTC m=+0.183384796 container attach e15d657f7a6f9f0cd3dddaf7edca75196d61e2fb152a2b82724007fa7a8392c8 (image=quay.io/ceph/ceph:v20, name=hungry_turing, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 01 20:32:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 01 20:32:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec 01 20:32:37 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/136114924' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:37 compute-0 ceph-mon[75880]: osdmap e20: 3 total, 3 up, 3 in
Dec 01 20:32:37 compute-0 ceph-mon[75880]: pgmap v43: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Dec 01 20:32:37 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec 01 20:32:37 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:32:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 01 20:32:37 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2880085936' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 01 20:32:38 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2880085936' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec 01 20:32:38 compute-0 hungry_turing[91416]: pool 'images' created
Dec 01 20:32:38 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec 01 20:32:38 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:32:38 compute-0 ceph-mon[75880]: osdmap e21: 3 total, 3 up, 3 in
Dec 01 20:32:38 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2880085936' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:38 compute-0 systemd[1]: libpod-e15d657f7a6f9f0cd3dddaf7edca75196d61e2fb152a2b82724007fa7a8392c8.scope: Deactivated successfully.
Dec 01 20:32:38 compute-0 podman[91401]: 2025-12-01 20:32:38.375633629 +0000 UTC m=+1.591193591 container died e15d657f7a6f9f0cd3dddaf7edca75196d61e2fb152a2b82724007fa7a8392c8 (image=quay.io/ceph/ceph:v20, name=hungry_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:32:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1bafbde22735c1a033ffbf115152882638e593266ef2f3e7cdafe8fddd0c1c6-merged.mount: Deactivated successfully.
Dec 01 20:32:38 compute-0 podman[91401]: 2025-12-01 20:32:38.427063928 +0000 UTC m=+1.642623890 container remove e15d657f7a6f9f0cd3dddaf7edca75196d61e2fb152a2b82724007fa7a8392c8 (image=quay.io/ceph/ceph:v20, name=hungry_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 01 20:32:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v46: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:38 compute-0 systemd[1]: libpod-conmon-e15d657f7a6f9f0cd3dddaf7edca75196d61e2fb152a2b82724007fa7a8392c8.scope: Deactivated successfully.
Dec 01 20:32:38 compute-0 sudo[91398]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:38 compute-0 sudo[91479]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etumlgfpyiecdfgkmangkqcpfevvrfbw ; /usr/bin/python3'
Dec 01 20:32:38 compute-0 sudo[91479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:38 compute-0 python3[91481]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:38 compute-0 podman[91482]: 2025-12-01 20:32:38.864550523 +0000 UTC m=+0.056461159 container create dd7e27cf0e3036e0c5d958e77afbafa02df3988c98768bdfe172632bf3de34a8 (image=quay.io/ceph/ceph:v20, name=keen_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:38 compute-0 systemd[1]: Started libpod-conmon-dd7e27cf0e3036e0c5d958e77afbafa02df3988c98768bdfe172632bf3de34a8.scope.
Dec 01 20:32:38 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b69aaa761b857e1ad992c4604260360156bd721809e26bb6bef619fcd361667/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b69aaa761b857e1ad992c4604260360156bd721809e26bb6bef619fcd361667/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:38 compute-0 podman[91482]: 2025-12-01 20:32:38.838071629 +0000 UTC m=+0.029982345 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:38 compute-0 podman[91482]: 2025-12-01 20:32:38.937680385 +0000 UTC m=+0.129591071 container init dd7e27cf0e3036e0c5d958e77afbafa02df3988c98768bdfe172632bf3de34a8 (image=quay.io/ceph/ceph:v20, name=keen_cartwright, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:38 compute-0 podman[91482]: 2025-12-01 20:32:38.944785528 +0000 UTC m=+0.136696154 container start dd7e27cf0e3036e0c5d958e77afbafa02df3988c98768bdfe172632bf3de34a8 (image=quay.io/ceph/ceph:v20, name=keen_cartwright, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:38 compute-0 podman[91482]: 2025-12-01 20:32:38.948483725 +0000 UTC m=+0.140394441 container attach dd7e27cf0e3036e0c5d958e77afbafa02df3988c98768bdfe172632bf3de34a8 (image=quay.io/ceph/ceph:v20, name=keen_cartwright, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 01 20:32:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 01 20:32:39 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/567654850' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 01 20:32:39 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/567654850' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec 01 20:32:39 compute-0 keen_cartwright[91497]: pool 'cephfs.cephfs.meta' created
Dec 01 20:32:39 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec 01 20:32:39 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2880085936' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:39 compute-0 ceph-mon[75880]: osdmap e22: 3 total, 3 up, 3 in
Dec 01 20:32:39 compute-0 ceph-mon[75880]: pgmap v46: 5 pgs: 4 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:39 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/567654850' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:39 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/567654850' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:39 compute-0 ceph-mon[75880]: osdmap e23: 3 total, 3 up, 3 in
Dec 01 20:32:39 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:32:39 compute-0 systemd[1]: libpod-dd7e27cf0e3036e0c5d958e77afbafa02df3988c98768bdfe172632bf3de34a8.scope: Deactivated successfully.
Dec 01 20:32:39 compute-0 podman[91482]: 2025-12-01 20:32:39.380914171 +0000 UTC m=+0.572824787 container died dd7e27cf0e3036e0c5d958e77afbafa02df3988c98768bdfe172632bf3de34a8 (image=quay.io/ceph/ceph:v20, name=keen_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 01 20:32:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b69aaa761b857e1ad992c4604260360156bd721809e26bb6bef619fcd361667-merged.mount: Deactivated successfully.
Dec 01 20:32:39 compute-0 podman[91482]: 2025-12-01 20:32:39.466406152 +0000 UTC m=+0.658316808 container remove dd7e27cf0e3036e0c5d958e77afbafa02df3988c98768bdfe172632bf3de34a8 (image=quay.io/ceph/ceph:v20, name=keen_cartwright, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:39 compute-0 systemd[1]: libpod-conmon-dd7e27cf0e3036e0c5d958e77afbafa02df3988c98768bdfe172632bf3de34a8.scope: Deactivated successfully.
Dec 01 20:32:39 compute-0 sudo[91479]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:39 compute-0 sudo[91561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fczfbopriazyyvrkfsitqcojnitnhqvm ; /usr/bin/python3'
Dec 01 20:32:39 compute-0 sudo[91561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:39 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:32:39 compute-0 python3[91563]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:39 compute-0 podman[91564]: 2025-12-01 20:32:39.948168121 +0000 UTC m=+0.069646054 container create ae0fe98fdc0645a590594713724c9fca9827f2d55b94dc6c150360e83304740f (image=quay.io/ceph/ceph:v20, name=vibrant_margulis, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:39 compute-0 systemd[1]: Started libpod-conmon-ae0fe98fdc0645a590594713724c9fca9827f2d55b94dc6c150360e83304740f.scope.
Dec 01 20:32:40 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db0d97157a20aa65e917731ba40d58e1af43ad44163acd5dfe712c12a7fcf53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db0d97157a20aa65e917731ba40d58e1af43ad44163acd5dfe712c12a7fcf53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:40 compute-0 podman[91564]: 2025-12-01 20:32:39.925846708 +0000 UTC m=+0.047324721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:40 compute-0 podman[91564]: 2025-12-01 20:32:40.026166617 +0000 UTC m=+0.147644580 container init ae0fe98fdc0645a590594713724c9fca9827f2d55b94dc6c150360e83304740f (image=quay.io/ceph/ceph:v20, name=vibrant_margulis, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 01 20:32:40 compute-0 podman[91564]: 2025-12-01 20:32:40.031497274 +0000 UTC m=+0.152975197 container start ae0fe98fdc0645a590594713724c9fca9827f2d55b94dc6c150360e83304740f (image=quay.io/ceph/ceph:v20, name=vibrant_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:40 compute-0 podman[91564]: 2025-12-01 20:32:40.034751367 +0000 UTC m=+0.156229350 container attach ae0fe98fdc0645a590594713724c9fca9827f2d55b94dc6c150360e83304740f (image=quay.io/ceph/ceph:v20, name=vibrant_margulis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 01 20:32:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 01 20:32:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec 01 20:32:40 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec 01 20:32:40 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:32:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v49: 6 pgs: 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 01 20:32:40 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2288043917' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 01 20:32:41 compute-0 ceph-mon[75880]: osdmap e24: 3 total, 3 up, 3 in
Dec 01 20:32:41 compute-0 ceph-mon[75880]: pgmap v49: 6 pgs: 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:41 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2288043917' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 01 20:32:41 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2288043917' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec 01 20:32:41 compute-0 vibrant_margulis[91579]: pool 'cephfs.cephfs.data' created
Dec 01 20:32:41 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec 01 20:32:41 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:32:41 compute-0 systemd[1]: libpod-ae0fe98fdc0645a590594713724c9fca9827f2d55b94dc6c150360e83304740f.scope: Deactivated successfully.
Dec 01 20:32:41 compute-0 podman[91606]: 2025-12-01 20:32:41.470705889 +0000 UTC m=+0.027083654 container died ae0fe98fdc0645a590594713724c9fca9827f2d55b94dc6c150360e83304740f (image=quay.io/ceph/ceph:v20, name=vibrant_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 01 20:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0db0d97157a20aa65e917731ba40d58e1af43ad44163acd5dfe712c12a7fcf53-merged.mount: Deactivated successfully.
Dec 01 20:32:41 compute-0 podman[91606]: 2025-12-01 20:32:41.518460802 +0000 UTC m=+0.074838527 container remove ae0fe98fdc0645a590594713724c9fca9827f2d55b94dc6c150360e83304740f (image=quay.io/ceph/ceph:v20, name=vibrant_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 01 20:32:41 compute-0 systemd[1]: libpod-conmon-ae0fe98fdc0645a590594713724c9fca9827f2d55b94dc6c150360e83304740f.scope: Deactivated successfully.
Dec 01 20:32:41 compute-0 sudo[91561]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:41 compute-0 sudo[91644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlpcnheqaxcwnkouoqovkeacyfzzylqs ; /usr/bin/python3'
Dec 01 20:32:41 compute-0 sudo[91644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:41 compute-0 python3[91646]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:42 compute-0 podman[91647]: 2025-12-01 20:32:42.009241585 +0000 UTC m=+0.052605187 container create 56cd688c75e51d0ac0549b25e520753439186b341d91aee2a5964a289aa18192 (image=quay.io/ceph/ceph:v20, name=sad_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:42 compute-0 systemd[1]: Started libpod-conmon-56cd688c75e51d0ac0549b25e520753439186b341d91aee2a5964a289aa18192.scope.
Dec 01 20:32:42 compute-0 podman[91647]: 2025-12-01 20:32:41.981284335 +0000 UTC m=+0.024648007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:42 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd18878df1bd22bc690f6e9ca43ce2715a34d324c251c888d16c6b8ced5243d3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd18878df1bd22bc690f6e9ca43ce2715a34d324c251c888d16c6b8ced5243d3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:42 compute-0 podman[91647]: 2025-12-01 20:32:42.112282559 +0000 UTC m=+0.155646231 container init 56cd688c75e51d0ac0549b25e520753439186b341d91aee2a5964a289aa18192 (image=quay.io/ceph/ceph:v20, name=sad_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:42 compute-0 podman[91647]: 2025-12-01 20:32:42.126336382 +0000 UTC m=+0.169699984 container start 56cd688c75e51d0ac0549b25e520753439186b341d91aee2a5964a289aa18192 (image=quay.io/ceph/ceph:v20, name=sad_hopper, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:42 compute-0 podman[91647]: 2025-12-01 20:32:42.130441031 +0000 UTC m=+0.173804643 container attach 56cd688c75e51d0ac0549b25e520753439186b341d91aee2a5964a289aa18192 (image=quay.io/ceph/ceph:v20, name=sad_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 20:32:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 01 20:32:42 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2288043917' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 01 20:32:42 compute-0 ceph-mon[75880]: osdmap e25: 3 total, 3 up, 3 in
Dec 01 20:32:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec 01 20:32:42 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec 01 20:32:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:32:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v52: 7 pgs: 1 creating+peering, 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 01 20:32:42 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/293201986' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 01 20:32:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 01 20:32:43 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/293201986' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 01 20:32:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec 01 20:32:43 compute-0 sad_hopper[91662]: enabled application 'rbd' on pool 'vms'
Dec 01 20:32:43 compute-0 ceph-mon[75880]: osdmap e26: 3 total, 3 up, 3 in
Dec 01 20:32:43 compute-0 ceph-mon[75880]: pgmap v52: 7 pgs: 1 creating+peering, 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:43 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/293201986' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 01 20:32:43 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec 01 20:32:43 compute-0 systemd[1]: libpod-56cd688c75e51d0ac0549b25e520753439186b341d91aee2a5964a289aa18192.scope: Deactivated successfully.
Dec 01 20:32:43 compute-0 podman[91647]: 2025-12-01 20:32:43.449504002 +0000 UTC m=+1.492867634 container died 56cd688c75e51d0ac0549b25e520753439186b341d91aee2a5964a289aa18192 (image=quay.io/ceph/ceph:v20, name=sad_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd18878df1bd22bc690f6e9ca43ce2715a34d324c251c888d16c6b8ced5243d3-merged.mount: Deactivated successfully.
Dec 01 20:32:43 compute-0 podman[91647]: 2025-12-01 20:32:43.501483099 +0000 UTC m=+1.544846741 container remove 56cd688c75e51d0ac0549b25e520753439186b341d91aee2a5964a289aa18192 (image=quay.io/ceph/ceph:v20, name=sad_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 01 20:32:43 compute-0 systemd[1]: libpod-conmon-56cd688c75e51d0ac0549b25e520753439186b341d91aee2a5964a289aa18192.scope: Deactivated successfully.
Dec 01 20:32:43 compute-0 sudo[91644]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:43 compute-0 sudo[91722]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljieoalvegtpoxnugadmvlyupmxdcysp ; /usr/bin/python3'
Dec 01 20:32:43 compute-0 sudo[91722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:43 compute-0 python3[91724]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:43 compute-0 podman[91725]: 2025-12-01 20:32:43.975442072 +0000 UTC m=+0.079450543 container create 3d84e453cb2ade9f3f04238e7cd860b223b93e2bf90b33e8044ee2a03dd67355 (image=quay.io/ceph/ceph:v20, name=adoring_turing, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 01 20:32:44 compute-0 systemd[1]: Started libpod-conmon-3d84e453cb2ade9f3f04238e7cd860b223b93e2bf90b33e8044ee2a03dd67355.scope.
Dec 01 20:32:44 compute-0 podman[91725]: 2025-12-01 20:32:43.936401752 +0000 UTC m=+0.040410273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:44 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ee7e79565cdac0b78966b85af85b17a3ed15f44391d38ba6a961e80a38df1f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ee7e79565cdac0b78966b85af85b17a3ed15f44391d38ba6a961e80a38df1f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:44 compute-0 podman[91725]: 2025-12-01 20:32:44.070002729 +0000 UTC m=+0.174011260 container init 3d84e453cb2ade9f3f04238e7cd860b223b93e2bf90b33e8044ee2a03dd67355 (image=quay.io/ceph/ceph:v20, name=adoring_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:44 compute-0 podman[91725]: 2025-12-01 20:32:44.079450166 +0000 UTC m=+0.183458627 container start 3d84e453cb2ade9f3f04238e7cd860b223b93e2bf90b33e8044ee2a03dd67355 (image=quay.io/ceph/ceph:v20, name=adoring_turing, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:32:44 compute-0 podman[91725]: 2025-12-01 20:32:44.083714851 +0000 UTC m=+0.187723372 container attach 3d84e453cb2ade9f3f04238e7cd860b223b93e2bf90b33e8044ee2a03dd67355 (image=quay.io/ceph/ceph:v20, name=adoring_turing, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:44 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:44 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/293201986' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 01 20:32:44 compute-0 ceph-mon[75880]: osdmap e27: 3 total, 3 up, 3 in
Dec 01 20:32:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:44 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 01 20:32:44 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4048735138' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 01 20:32:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 01 20:32:45 compute-0 ceph-mon[75880]: pgmap v54: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:45 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/4048735138' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 01 20:32:45 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4048735138' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 01 20:32:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec 01 20:32:45 compute-0 adoring_turing[91740]: enabled application 'rbd' on pool 'volumes'
Dec 01 20:32:45 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec 01 20:32:45 compute-0 systemd[1]: libpod-3d84e453cb2ade9f3f04238e7cd860b223b93e2bf90b33e8044ee2a03dd67355.scope: Deactivated successfully.
Dec 01 20:32:45 compute-0 podman[91725]: 2025-12-01 20:32:45.480810329 +0000 UTC m=+1.584818800 container died 3d84e453cb2ade9f3f04238e7cd860b223b93e2bf90b33e8044ee2a03dd67355 (image=quay.io/ceph/ceph:v20, name=adoring_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 01 20:32:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5ee7e79565cdac0b78966b85af85b17a3ed15f44391d38ba6a961e80a38df1f-merged.mount: Deactivated successfully.
Dec 01 20:32:45 compute-0 podman[91725]: 2025-12-01 20:32:45.538870237 +0000 UTC m=+1.642878718 container remove 3d84e453cb2ade9f3f04238e7cd860b223b93e2bf90b33e8044ee2a03dd67355 (image=quay.io/ceph/ceph:v20, name=adoring_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 01 20:32:45 compute-0 systemd[1]: libpod-conmon-3d84e453cb2ade9f3f04238e7cd860b223b93e2bf90b33e8044ee2a03dd67355.scope: Deactivated successfully.
Dec 01 20:32:45 compute-0 sudo[91722]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:45 compute-0 sudo[91799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxyshthewgmazrmgxhjsqtzflqkviuhx ; /usr/bin/python3'
Dec 01 20:32:45 compute-0 sudo[91799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:45 compute-0 python3[91801]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:45 compute-0 podman[91802]: 2025-12-01 20:32:45.924986214 +0000 UTC m=+0.066308708 container create 54dcef3757939f3b2dc422135842458f115efba7e3c03a8bf6d8215f9aebd966 (image=quay.io/ceph/ceph:v20, name=jovial_lederberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 01 20:32:45 compute-0 systemd[1]: Started libpod-conmon-54dcef3757939f3b2dc422135842458f115efba7e3c03a8bf6d8215f9aebd966.scope.
Dec 01 20:32:45 compute-0 podman[91802]: 2025-12-01 20:32:45.895242498 +0000 UTC m=+0.036565042 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:45 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a6c390baa4b62405c019c465fac6be2d3dde61d53e8b258ee854a8d3cf190a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a6c390baa4b62405c019c465fac6be2d3dde61d53e8b258ee854a8d3cf190a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:46 compute-0 podman[91802]: 2025-12-01 20:32:46.012049426 +0000 UTC m=+0.153371910 container init 54dcef3757939f3b2dc422135842458f115efba7e3c03a8bf6d8215f9aebd966 (image=quay.io/ceph/ceph:v20, name=jovial_lederberg, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:46 compute-0 podman[91802]: 2025-12-01 20:32:46.01919841 +0000 UTC m=+0.160520864 container start 54dcef3757939f3b2dc422135842458f115efba7e3c03a8bf6d8215f9aebd966 (image=quay.io/ceph/ceph:v20, name=jovial_lederberg, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 01 20:32:46 compute-0 podman[91802]: 2025-12-01 20:32:46.023094383 +0000 UTC m=+0.164416847 container attach 54dcef3757939f3b2dc422135842458f115efba7e3c03a8bf6d8215f9aebd966 (image=quay.io/ceph/ceph:v20, name=jovial_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 01 20:32:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2613164982' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 01 20:32:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v56: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 01 20:32:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2613164982' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 01 20:32:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec 01 20:32:46 compute-0 jovial_lederberg[91817]: enabled application 'rbd' on pool 'backups'
Dec 01 20:32:46 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec 01 20:32:46 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/4048735138' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 01 20:32:46 compute-0 ceph-mon[75880]: osdmap e28: 3 total, 3 up, 3 in
Dec 01 20:32:46 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2613164982' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 01 20:32:46 compute-0 systemd[1]: libpod-54dcef3757939f3b2dc422135842458f115efba7e3c03a8bf6d8215f9aebd966.scope: Deactivated successfully.
Dec 01 20:32:46 compute-0 podman[91802]: 2025-12-01 20:32:46.489538909 +0000 UTC m=+0.630861393 container died 54dcef3757939f3b2dc422135842458f115efba7e3c03a8bf6d8215f9aebd966 (image=quay.io/ceph/ceph:v20, name=jovial_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 01 20:32:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a6c390baa4b62405c019c465fac6be2d3dde61d53e8b258ee854a8d3cf190a-merged.mount: Deactivated successfully.
Dec 01 20:32:46 compute-0 podman[91802]: 2025-12-01 20:32:46.538509322 +0000 UTC m=+0.679831816 container remove 54dcef3757939f3b2dc422135842458f115efba7e3c03a8bf6d8215f9aebd966 (image=quay.io/ceph/ceph:v20, name=jovial_lederberg, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:46 compute-0 systemd[1]: libpod-conmon-54dcef3757939f3b2dc422135842458f115efba7e3c03a8bf6d8215f9aebd966.scope: Deactivated successfully.
Dec 01 20:32:46 compute-0 sudo[91799]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:46 compute-0 sudo[91876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myksjslbiuaajusaryeyiwzwhdqqhzoh ; /usr/bin/python3'
Dec 01 20:32:46 compute-0 sudo[91876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:46 compute-0 python3[91878]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:46 compute-0 podman[91879]: 2025-12-01 20:32:46.931545097 +0000 UTC m=+0.054034143 container create 425e624d8d663bae22200e415eab11d9421d20192bb770c1d3276c02f1d5263d (image=quay.io/ceph/ceph:v20, name=distracted_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:32:46 compute-0 systemd[1]: Started libpod-conmon-425e624d8d663bae22200e415eab11d9421d20192bb770c1d3276c02f1d5263d.scope.
Dec 01 20:32:46 compute-0 podman[91879]: 2025-12-01 20:32:46.903547405 +0000 UTC m=+0.026036471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:47 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2aa94e082ebf7d03c74b9376ed972f4d40d805ff16e4a4b55cdb5b09932e99/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2aa94e082ebf7d03c74b9376ed972f4d40d805ff16e4a4b55cdb5b09932e99/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:47 compute-0 podman[91879]: 2025-12-01 20:32:47.037795882 +0000 UTC m=+0.160284988 container init 425e624d8d663bae22200e415eab11d9421d20192bb770c1d3276c02f1d5263d (image=quay.io/ceph/ceph:v20, name=distracted_burnell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:32:47 compute-0 podman[91879]: 2025-12-01 20:32:47.045243306 +0000 UTC m=+0.167732352 container start 425e624d8d663bae22200e415eab11d9421d20192bb770c1d3276c02f1d5263d (image=quay.io/ceph/ceph:v20, name=distracted_burnell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 01 20:32:47 compute-0 podman[91879]: 2025-12-01 20:32:47.049662465 +0000 UTC m=+0.172151581 container attach 425e624d8d663bae22200e415eab11d9421d20192bb770c1d3276c02f1d5263d (image=quay.io/ceph/ceph:v20, name=distracted_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 01 20:32:47 compute-0 ceph-mon[75880]: pgmap v56: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:47 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2613164982' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 01 20:32:47 compute-0 ceph-mon[75880]: osdmap e29: 3 total, 3 up, 3 in
Dec 01 20:32:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 01 20:32:47 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/73645736' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 01 20:32:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 01 20:32:48 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/73645736' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 01 20:32:48 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/73645736' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 01 20:32:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec 01 20:32:48 compute-0 distracted_burnell[91894]: enabled application 'rbd' on pool 'images'
Dec 01 20:32:48 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec 01 20:32:48 compute-0 systemd[1]: libpod-425e624d8d663bae22200e415eab11d9421d20192bb770c1d3276c02f1d5263d.scope: Deactivated successfully.
Dec 01 20:32:48 compute-0 podman[91879]: 2025-12-01 20:32:48.51461182 +0000 UTC m=+1.637100836 container died 425e624d8d663bae22200e415eab11d9421d20192bb770c1d3276c02f1d5263d (image=quay.io/ceph/ceph:v20, name=distracted_burnell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d2aa94e082ebf7d03c74b9376ed972f4d40d805ff16e4a4b55cdb5b09932e99-merged.mount: Deactivated successfully.
Dec 01 20:32:48 compute-0 podman[91879]: 2025-12-01 20:32:48.556332453 +0000 UTC m=+1.678821469 container remove 425e624d8d663bae22200e415eab11d9421d20192bb770c1d3276c02f1d5263d (image=quay.io/ceph/ceph:v20, name=distracted_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 01 20:32:48 compute-0 systemd[1]: libpod-conmon-425e624d8d663bae22200e415eab11d9421d20192bb770c1d3276c02f1d5263d.scope: Deactivated successfully.
Dec 01 20:32:48 compute-0 sudo[91876]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:48 compute-0 sudo[91954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bskgbmmkqhozdlsbzdnjdqjpisttbkvl ; /usr/bin/python3'
Dec 01 20:32:48 compute-0 sudo[91954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:48 compute-0 python3[91956]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:48 compute-0 podman[91957]: 2025-12-01 20:32:48.94591877 +0000 UTC m=+0.053061682 container create a5ebdec3687110954d0951d82ffca5f2de9f37477ad17e39441fd04cdeec95f9 (image=quay.io/ceph/ceph:v20, name=agitated_meninsky, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 01 20:32:48 compute-0 systemd[1]: Started libpod-conmon-a5ebdec3687110954d0951d82ffca5f2de9f37477ad17e39441fd04cdeec95f9.scope.
Dec 01 20:32:49 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b76a4110c2077e8821fdc7594419ee65cd25a3ef30404d1455178e7205945f5a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b76a4110c2077e8821fdc7594419ee65cd25a3ef30404d1455178e7205945f5a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:49 compute-0 podman[91957]: 2025-12-01 20:32:48.923872106 +0000 UTC m=+0.031015068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:49 compute-0 podman[91957]: 2025-12-01 20:32:49.019536318 +0000 UTC m=+0.126679260 container init a5ebdec3687110954d0951d82ffca5f2de9f37477ad17e39441fd04cdeec95f9 (image=quay.io/ceph/ceph:v20, name=agitated_meninsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 01 20:32:49 compute-0 podman[91957]: 2025-12-01 20:32:49.026342662 +0000 UTC m=+0.133485574 container start a5ebdec3687110954d0951d82ffca5f2de9f37477ad17e39441fd04cdeec95f9 (image=quay.io/ceph/ceph:v20, name=agitated_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:49 compute-0 podman[91957]: 2025-12-01 20:32:49.029754419 +0000 UTC m=+0.136897411 container attach a5ebdec3687110954d0951d82ffca5f2de9f37477ad17e39441fd04cdeec95f9 (image=quay.io/ceph/ceph:v20, name=agitated_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 01 20:32:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 01 20:32:49 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/325394187' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 01 20:32:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 01 20:32:49 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/325394187' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 01 20:32:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec 01 20:32:49 compute-0 agitated_meninsky[91972]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 01 20:32:49 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec 01 20:32:49 compute-0 ceph-mon[75880]: pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:49 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/73645736' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 01 20:32:49 compute-0 ceph-mon[75880]: osdmap e30: 3 total, 3 up, 3 in
Dec 01 20:32:49 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/325394187' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 01 20:32:49 compute-0 systemd[1]: libpod-a5ebdec3687110954d0951d82ffca5f2de9f37477ad17e39441fd04cdeec95f9.scope: Deactivated successfully.
Dec 01 20:32:49 compute-0 podman[91957]: 2025-12-01 20:32:49.519980685 +0000 UTC m=+0.627123617 container died a5ebdec3687110954d0951d82ffca5f2de9f37477ad17e39441fd04cdeec95f9 (image=quay.io/ceph/ceph:v20, name=agitated_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 01 20:32:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b76a4110c2077e8821fdc7594419ee65cd25a3ef30404d1455178e7205945f5a-merged.mount: Deactivated successfully.
Dec 01 20:32:49 compute-0 podman[91957]: 2025-12-01 20:32:49.556930868 +0000 UTC m=+0.664073790 container remove a5ebdec3687110954d0951d82ffca5f2de9f37477ad17e39441fd04cdeec95f9 (image=quay.io/ceph/ceph:v20, name=agitated_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 01 20:32:49 compute-0 systemd[1]: libpod-conmon-a5ebdec3687110954d0951d82ffca5f2de9f37477ad17e39441fd04cdeec95f9.scope: Deactivated successfully.
Dec 01 20:32:49 compute-0 sudo[91954]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:49 compute-0 sudo[92031]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hihfuqppqbfminufcqdotthlogzyftrn ; /usr/bin/python3'
Dec 01 20:32:49 compute-0 sudo[92031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:49 compute-0 python3[92033]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:49 compute-0 podman[92034]: 2025-12-01 20:32:49.890679547 +0000 UTC m=+0.042233741 container create b34960b6f278fd535416a174bc97510e52a5098d330f414c278135487770fd67 (image=quay.io/ceph/ceph:v20, name=festive_ramanujan, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 01 20:32:49 compute-0 systemd[1]: Started libpod-conmon-b34960b6f278fd535416a174bc97510e52a5098d330f414c278135487770fd67.scope.
Dec 01 20:32:49 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1b98d17fa6a67ace0afe3ade5ed45785049ee40bc10bb579d7b868daa380a49/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1b98d17fa6a67ace0afe3ade5ed45785049ee40bc10bb579d7b868daa380a49/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:49 compute-0 podman[92034]: 2025-12-01 20:32:49.96130303 +0000 UTC m=+0.112857254 container init b34960b6f278fd535416a174bc97510e52a5098d330f414c278135487770fd67 (image=quay.io/ceph/ceph:v20, name=festive_ramanujan, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:49 compute-0 podman[92034]: 2025-12-01 20:32:49.96574922 +0000 UTC m=+0.117303434 container start b34960b6f278fd535416a174bc97510e52a5098d330f414c278135487770fd67 (image=quay.io/ceph/ceph:v20, name=festive_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Dec 01 20:32:49 compute-0 podman[92034]: 2025-12-01 20:32:49.87014774 +0000 UTC m=+0.021701974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:49 compute-0 podman[92034]: 2025-12-01 20:32:49.968518137 +0000 UTC m=+0.120072341 container attach b34960b6f278fd535416a174bc97510e52a5098d330f414c278135487770fd67 (image=quay.io/ceph/ceph:v20, name=festive_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 01 20:32:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2883557880' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 01 20:32:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 01 20:32:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2883557880' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 01 20:32:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec 01 20:32:50 compute-0 festive_ramanujan[92049]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 01 20:32:50 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/325394187' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 01 20:32:50 compute-0 ceph-mon[75880]: osdmap e31: 3 total, 3 up, 3 in
Dec 01 20:32:50 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2883557880' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 01 20:32:50 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec 01 20:32:50 compute-0 systemd[1]: libpod-b34960b6f278fd535416a174bc97510e52a5098d330f414c278135487770fd67.scope: Deactivated successfully.
Dec 01 20:32:50 compute-0 podman[92034]: 2025-12-01 20:32:50.534224449 +0000 UTC m=+0.685778643 container died b34960b6f278fd535416a174bc97510e52a5098d330f414c278135487770fd67 (image=quay.io/ceph/ceph:v20, name=festive_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:32:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1b98d17fa6a67ace0afe3ade5ed45785049ee40bc10bb579d7b868daa380a49-merged.mount: Deactivated successfully.
Dec 01 20:32:50 compute-0 podman[92034]: 2025-12-01 20:32:50.571352297 +0000 UTC m=+0.722906501 container remove b34960b6f278fd535416a174bc97510e52a5098d330f414c278135487770fd67 (image=quay.io/ceph/ceph:v20, name=festive_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:50 compute-0 systemd[1]: libpod-conmon-b34960b6f278fd535416a174bc97510e52a5098d330f414c278135487770fd67.scope: Deactivated successfully.
Dec 01 20:32:50 compute-0 sudo[92031]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:51 compute-0 ceph-mon[75880]: pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:51 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2883557880' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 01 20:32:51 compute-0 ceph-mon[75880]: osdmap e32: 3 total, 3 up, 3 in
Dec 01 20:32:52 compute-0 python3[92159]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:32:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:52 compute-0 python3[92230]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764621172.0030293-36848-2656642350343/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:32:52 compute-0 sudo[92278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbvqnrorqincwoswzbbsfwfqhpgnttjk ; /usr/bin/python3'
Dec 01 20:32:52 compute-0 sudo[92278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:53 compute-0 python3[92280]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:53 compute-0 podman[92281]: 2025-12-01 20:32:53.141670996 +0000 UTC m=+0.060508746 container create 28bbdc3ca8dced1b21dea55527a6c09750d197b7828c0d71e57265352af7cdf8 (image=quay.io/ceph/ceph:v20, name=condescending_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 01 20:32:53 compute-0 systemd[1]: Started libpod-conmon-28bbdc3ca8dced1b21dea55527a6c09750d197b7828c0d71e57265352af7cdf8.scope.
Dec 01 20:32:53 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e33216af8ceebc50d768e06637cc53c364d08b4b24585667d089143e3a98f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e33216af8ceebc50d768e06637cc53c364d08b4b24585667d089143e3a98f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e33216af8ceebc50d768e06637cc53c364d08b4b24585667d089143e3a98f6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:53 compute-0 podman[92281]: 2025-12-01 20:32:53.125107224 +0000 UTC m=+0.043944984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:53 compute-0 podman[92281]: 2025-12-01 20:32:53.221745488 +0000 UTC m=+0.140583258 container init 28bbdc3ca8dced1b21dea55527a6c09750d197b7828c0d71e57265352af7cdf8 (image=quay.io/ceph/ceph:v20, name=condescending_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 01 20:32:53 compute-0 podman[92281]: 2025-12-01 20:32:53.230614277 +0000 UTC m=+0.149452027 container start 28bbdc3ca8dced1b21dea55527a6c09750d197b7828c0d71e57265352af7cdf8 (image=quay.io/ceph/ceph:v20, name=condescending_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:32:53 compute-0 podman[92281]: 2025-12-01 20:32:53.23392449 +0000 UTC m=+0.152762240 container attach 28bbdc3ca8dced1b21dea55527a6c09750d197b7828c0d71e57265352af7cdf8 (image=quay.io/ceph/ceph:v20, name=condescending_davinci, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 01 20:32:53 compute-0 ceph-mon[75880]: pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:53 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14230 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:32:53 compute-0 ceph-mgr[76174]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 01 20:32:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 01 20:32:53 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 01 20:32:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 01 20:32:53 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 01 20:32:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 01 20:32:53 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 01 20:32:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 01 20:32:53 compute-0 ceph-mon[75880]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 01 20:32:53 compute-0 ceph-mon[75880]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 01 20:32:53 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0[75876]: 2025-12-01T20:32:53.726+0000 7facf63f0640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 01 20:32:53 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 01 20:32:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e2 new map
Dec 01 20:32:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-12-01T20:32:53.727206+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-01T20:32:53.726970+0000
                                           modified        2025-12-01T20:32:53.726970+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 01 20:32:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec 01 20:32:53 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec 01 20:32:53 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 01 20:32:53 compute-0 ceph-mgr[76174]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 01 20:32:53 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 01 20:32:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 01 20:32:53 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:53 compute-0 ceph-mgr[76174]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 01 20:32:53 compute-0 systemd[1]: libpod-28bbdc3ca8dced1b21dea55527a6c09750d197b7828c0d71e57265352af7cdf8.scope: Deactivated successfully.
Dec 01 20:32:53 compute-0 podman[92281]: 2025-12-01 20:32:53.770121313 +0000 UTC m=+0.688959103 container died 28bbdc3ca8dced1b21dea55527a6c09750d197b7828c0d71e57265352af7cdf8 (image=quay.io/ceph/ceph:v20, name=condescending_davinci, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:32:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3e33216af8ceebc50d768e06637cc53c364d08b4b24585667d089143e3a98f6-merged.mount: Deactivated successfully.
Dec 01 20:32:53 compute-0 sudo[92321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:53 compute-0 sudo[92321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:53 compute-0 sudo[92321]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:53 compute-0 podman[92281]: 2025-12-01 20:32:53.818863078 +0000 UTC m=+0.737700868 container remove 28bbdc3ca8dced1b21dea55527a6c09750d197b7828c0d71e57265352af7cdf8 (image=quay.io/ceph/ceph:v20, name=condescending_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:53 compute-0 systemd[1]: libpod-conmon-28bbdc3ca8dced1b21dea55527a6c09750d197b7828c0d71e57265352af7cdf8.scope: Deactivated successfully.
Dec 01 20:32:53 compute-0 sudo[92278]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:53 compute-0 sudo[92356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 01 20:32:53 compute-0 sudo[92356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:53 compute-0 sudo[92404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtfkccbuokbcrlfhrmhivnjkyvrwuhoe ; /usr/bin/python3'
Dec 01 20:32:53 compute-0 sudo[92404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:54 compute-0 python3[92406]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:54 compute-0 podman[92422]: 2025-12-01 20:32:54.175644001 +0000 UTC m=+0.049905512 container create 1298bfa614bce772f21b4ee1e4e4f81c80d47cc0298d832f2a6c94d9768b4591 (image=quay.io/ceph/ceph:v20, name=stoic_mendel, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:54 compute-0 systemd[1]: Started libpod-conmon-1298bfa614bce772f21b4ee1e4e4f81c80d47cc0298d832f2a6c94d9768b4591.scope.
Dec 01 20:32:54 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bae43472ed4bf6db3bfc75e01c303965b666c4f345a15a79c94ae2d308a6df6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bae43472ed4bf6db3bfc75e01c303965b666c4f345a15a79c94ae2d308a6df6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bae43472ed4bf6db3bfc75e01c303965b666c4f345a15a79c94ae2d308a6df6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:54 compute-0 podman[92422]: 2025-12-01 20:32:54.151873783 +0000 UTC m=+0.026135334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:54 compute-0 podman[92422]: 2025-12-01 20:32:54.251663855 +0000 UTC m=+0.125925386 container init 1298bfa614bce772f21b4ee1e4e4f81c80d47cc0298d832f2a6c94d9768b4591 (image=quay.io/ceph/ceph:v20, name=stoic_mendel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:54 compute-0 podman[92422]: 2025-12-01 20:32:54.257238931 +0000 UTC m=+0.131500442 container start 1298bfa614bce772f21b4ee1e4e4f81c80d47cc0298d832f2a6c94d9768b4591 (image=quay.io/ceph/ceph:v20, name=stoic_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:32:54 compute-0 podman[92422]: 2025-12-01 20:32:54.259745019 +0000 UTC m=+0.134006550 container attach 1298bfa614bce772f21b4ee1e4e4f81c80d47cc0298d832f2a6c94d9768b4591 (image=quay.io/ceph/ceph:v20, name=stoic_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 01 20:32:54 compute-0 podman[92466]: 2025-12-01 20:32:54.38044456 +0000 UTC m=+0.118297216 container exec 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:54 compute-0 podman[92466]: 2025-12-01 20:32:54.469440792 +0000 UTC m=+0.207293418 container exec_died 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 01 20:32:54 compute-0 ceph-mon[75880]: from='client.14230 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:32:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 01 20:32:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 01 20:32:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 01 20:32:54 compute-0 ceph-mon[75880]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 01 20:32:54 compute-0 ceph-mon[75880]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 01 20:32:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 01 20:32:54 compute-0 ceph-mon[75880]: osdmap e33: 3 total, 3 up, 3 in
Dec 01 20:32:54 compute-0 ceph-mon[75880]: fsmap cephfs:0
Dec 01 20:32:54 compute-0 ceph-mon[75880]: Saving service mds.cephfs spec with placement compute-0
Dec 01 20:32:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:54 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14232 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:32:54 compute-0 ceph-mgr[76174]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 01 20:32:54 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 01 20:32:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 01 20:32:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:54 compute-0 stoic_mendel[92453]: Scheduled mds.cephfs update...
Dec 01 20:32:54 compute-0 systemd[1]: libpod-1298bfa614bce772f21b4ee1e4e4f81c80d47cc0298d832f2a6c94d9768b4591.scope: Deactivated successfully.
Dec 01 20:32:54 compute-0 podman[92422]: 2025-12-01 20:32:54.669279594 +0000 UTC m=+0.543541135 container died 1298bfa614bce772f21b4ee1e4e4f81c80d47cc0298d832f2a6c94d9768b4591 (image=quay.io/ceph/ceph:v20, name=stoic_mendel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:32:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bae43472ed4bf6db3bfc75e01c303965b666c4f345a15a79c94ae2d308a6df6-merged.mount: Deactivated successfully.
Dec 01 20:32:54 compute-0 podman[92422]: 2025-12-01 20:32:54.708825939 +0000 UTC m=+0.583087450 container remove 1298bfa614bce772f21b4ee1e4e4f81c80d47cc0298d832f2a6c94d9768b4591 (image=quay.io/ceph/ceph:v20, name=stoic_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 01 20:32:54 compute-0 systemd[1]: libpod-conmon-1298bfa614bce772f21b4ee1e4e4f81c80d47cc0298d832f2a6c94d9768b4591.scope: Deactivated successfully.
Dec 01 20:32:54 compute-0 sudo[92404]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:54 compute-0 sudo[92356]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:32:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:32:55 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:55 compute-0 sudo[92649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:55 compute-0 sudo[92649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:55 compute-0 sudo[92649]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:55 compute-0 sudo[92700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:32:55 compute-0 sudo[92700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:55 compute-0 sudo[92774]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpsqnlkfieavbelhopzaquzyaxmxrxvh ; /usr/bin/python3'
Dec 01 20:32:55 compute-0 sudo[92774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:55 compute-0 python3[92776]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 01 20:32:55 compute-0 sudo[92774]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:55 compute-0 sudo[92867]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oidzjxsfttmvznawfwbjygyrhorfttuh ; /usr/bin/python3'
Dec 01 20:32:55 compute-0 sudo[92867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:55 compute-0 ceph-mon[75880]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:55 compute-0 ceph-mon[75880]: from='client.14232 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:32:55 compute-0 ceph-mon[75880]: Saving service mds.cephfs spec with placement compute-0
Dec 01 20:32:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:55 compute-0 sudo[92700]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:32:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:32:55 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:32:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:32:55 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:32:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:32:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:32:55 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:32:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:32:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:55 compute-0 python3[92869]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764621175.070587-36878-209720398647956/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=2727cc641a79df9d39e1e523028429a32d1fa66b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:32:55 compute-0 sudo[92867]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:55 compute-0 sudo[92882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:55 compute-0 sudo[92882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:55 compute-0 sudo[92882]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:55 compute-0 sudo[92916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:32:55 compute-0 sudo[92916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:56 compute-0 sudo[92979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnxinqsizerlklowwhrajcrymdhqnzqv ; /usr/bin/python3'
Dec 01 20:32:56 compute-0 sudo[92979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:56 compute-0 podman[92994]: 2025-12-01 20:32:56.141843429 +0000 UTC m=+0.052688080 container create 2f77c635b7d1ac2a21c60c3c28033530abc5689ee5aa885ee72e44f761b2747e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 01 20:32:56 compute-0 python3[92983]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:56 compute-0 systemd[1]: Started libpod-conmon-2f77c635b7d1ac2a21c60c3c28033530abc5689ee5aa885ee72e44f761b2747e.scope.
Dec 01 20:32:56 compute-0 podman[92994]: 2025-12-01 20:32:56.114741525 +0000 UTC m=+0.025586226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:56 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:56 compute-0 podman[92994]: 2025-12-01 20:32:56.240476394 +0000 UTC m=+0.151321015 container init 2f77c635b7d1ac2a21c60c3c28033530abc5689ee5aa885ee72e44f761b2747e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:56 compute-0 podman[92994]: 2025-12-01 20:32:56.250125568 +0000 UTC m=+0.160970189 container start 2f77c635b7d1ac2a21c60c3c28033530abc5689ee5aa885ee72e44f761b2747e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:56 compute-0 podman[93010]: 2025-12-01 20:32:56.254107464 +0000 UTC m=+0.058021889 container create 6d09442f5fce5ae02eb6d1da3b5002599bfcd9cc45ff323e2381f03a9d653879 (image=quay.io/ceph/ceph:v20, name=clever_perlman, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 01 20:32:56 compute-0 wonderful_wing[93011]: 167 167
Dec 01 20:32:56 compute-0 systemd[1]: libpod-2f77c635b7d1ac2a21c60c3c28033530abc5689ee5aa885ee72e44f761b2747e.scope: Deactivated successfully.
Dec 01 20:32:56 compute-0 podman[92994]: 2025-12-01 20:32:56.263054585 +0000 UTC m=+0.173899226 container attach 2f77c635b7d1ac2a21c60c3c28033530abc5689ee5aa885ee72e44f761b2747e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 01 20:32:56 compute-0 podman[92994]: 2025-12-01 20:32:56.266683429 +0000 UTC m=+0.177528050 container died 2f77c635b7d1ac2a21c60c3c28033530abc5689ee5aa885ee72e44f761b2747e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 01 20:32:56 compute-0 systemd[1]: Started libpod-conmon-6d09442f5fce5ae02eb6d1da3b5002599bfcd9cc45ff323e2381f03a9d653879.scope.
Dec 01 20:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-edd05b545a1caf5bfad43c13b20fca78222390e7e4a15cf46e1deec855763db5-merged.mount: Deactivated successfully.
Dec 01 20:32:56 compute-0 podman[92994]: 2025-12-01 20:32:56.310316614 +0000 UTC m=+0.221161235 container remove 2f77c635b7d1ac2a21c60c3c28033530abc5689ee5aa885ee72e44f761b2747e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 01 20:32:56 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:56 compute-0 podman[93010]: 2025-12-01 20:32:56.229265951 +0000 UTC m=+0.033180346 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8cd25ca01c4c05b782891c7eb99594243a33ec609eafd84de8a93a2f71904b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8cd25ca01c4c05b782891c7eb99594243a33ec609eafd84de8a93a2f71904b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:56 compute-0 systemd[1]: libpod-conmon-2f77c635b7d1ac2a21c60c3c28033530abc5689ee5aa885ee72e44f761b2747e.scope: Deactivated successfully.
Dec 01 20:32:56 compute-0 podman[93010]: 2025-12-01 20:32:56.337662304 +0000 UTC m=+0.141576719 container init 6d09442f5fce5ae02eb6d1da3b5002599bfcd9cc45ff323e2381f03a9d653879 (image=quay.io/ceph/ceph:v20, name=clever_perlman, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:56 compute-0 podman[93010]: 2025-12-01 20:32:56.343005612 +0000 UTC m=+0.146920017 container start 6d09442f5fce5ae02eb6d1da3b5002599bfcd9cc45ff323e2381f03a9d653879 (image=quay.io/ceph/ceph:v20, name=clever_perlman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:32:56 compute-0 podman[93010]: 2025-12-01 20:32:56.346358908 +0000 UTC m=+0.150273303 container attach 6d09442f5fce5ae02eb6d1da3b5002599bfcd9cc45ff323e2381f03a9d653879 (image=quay.io/ceph/ceph:v20, name=clever_perlman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 01 20:32:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:56 compute-0 podman[93055]: 2025-12-01 20:32:56.504387903 +0000 UTC m=+0.066857896 container create 81b911ce13454ed504462d6f240258a37487e7f41be5b023a5d6ad5a8718ed1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_heisenberg, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:56 compute-0 systemd[1]: Started libpod-conmon-81b911ce13454ed504462d6f240258a37487e7f41be5b023a5d6ad5a8718ed1a.scope.
Dec 01 20:32:56 compute-0 podman[93055]: 2025-12-01 20:32:56.481778371 +0000 UTC m=+0.044248404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:56 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/870726fe8c55186db2dcae8225a67b5114dc2bc837affcd4dd576cfc7b0b3230/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/870726fe8c55186db2dcae8225a67b5114dc2bc837affcd4dd576cfc7b0b3230/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/870726fe8c55186db2dcae8225a67b5114dc2bc837affcd4dd576cfc7b0b3230/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/870726fe8c55186db2dcae8225a67b5114dc2bc837affcd4dd576cfc7b0b3230/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/870726fe8c55186db2dcae8225a67b5114dc2bc837affcd4dd576cfc7b0b3230/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:56 compute-0 podman[93055]: 2025-12-01 20:32:56.623807094 +0000 UTC m=+0.186277087 container init 81b911ce13454ed504462d6f240258a37487e7f41be5b023a5d6ad5a8718ed1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_heisenberg, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 01 20:32:56 compute-0 podman[93055]: 2025-12-01 20:32:56.630782163 +0000 UTC m=+0.193252156 container start 81b911ce13454ed504462d6f240258a37487e7f41be5b023a5d6ad5a8718ed1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 01 20:32:56 compute-0 podman[93055]: 2025-12-01 20:32:56.634466449 +0000 UTC m=+0.196936462 container attach 81b911ce13454ed504462d6f240258a37487e7f41be5b023a5d6ad5a8718ed1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_heisenberg, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:32:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:32:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:32:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:32:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:32:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Dec 01 20:32:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2361598368' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 01 20:32:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2361598368' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 01 20:32:56 compute-0 systemd[1]: libpod-6d09442f5fce5ae02eb6d1da3b5002599bfcd9cc45ff323e2381f03a9d653879.scope: Deactivated successfully.
Dec 01 20:32:56 compute-0 conmon[93044]: conmon 6d09442f5fce5ae02eb6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d09442f5fce5ae02eb6d1da3b5002599bfcd9cc45ff323e2381f03a9d653879.scope/container/memory.events
Dec 01 20:32:56 compute-0 podman[93010]: 2025-12-01 20:32:56.872759332 +0000 UTC m=+0.676673767 container died 6d09442f5fce5ae02eb6d1da3b5002599bfcd9cc45ff323e2381f03a9d653879 (image=quay.io/ceph/ceph:v20, name=clever_perlman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 01 20:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-be8cd25ca01c4c05b782891c7eb99594243a33ec609eafd84de8a93a2f71904b-merged.mount: Deactivated successfully.
Dec 01 20:32:57 compute-0 podman[93010]: 2025-12-01 20:32:57.000271127 +0000 UTC m=+0.804185532 container remove 6d09442f5fce5ae02eb6d1da3b5002599bfcd9cc45ff323e2381f03a9d653879 (image=quay.io/ceph/ceph:v20, name=clever_perlman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:32:57 compute-0 systemd[1]: libpod-conmon-6d09442f5fce5ae02eb6d1da3b5002599bfcd9cc45ff323e2381f03a9d653879.scope: Deactivated successfully.
Dec 01 20:32:57 compute-0 sudo[92979]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:57 compute-0 exciting_heisenberg[93088]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:32:57 compute-0 exciting_heisenberg[93088]: --> All data devices are unavailable
Dec 01 20:32:57 compute-0 systemd[1]: libpod-81b911ce13454ed504462d6f240258a37487e7f41be5b023a5d6ad5a8718ed1a.scope: Deactivated successfully.
Dec 01 20:32:57 compute-0 podman[93055]: 2025-12-01 20:32:57.134244705 +0000 UTC m=+0.696714748 container died 81b911ce13454ed504462d6f240258a37487e7f41be5b023a5d6ad5a8718ed1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-870726fe8c55186db2dcae8225a67b5114dc2bc837affcd4dd576cfc7b0b3230-merged.mount: Deactivated successfully.
Dec 01 20:32:57 compute-0 podman[93055]: 2025-12-01 20:32:57.175429862 +0000 UTC m=+0.737899855 container remove 81b911ce13454ed504462d6f240258a37487e7f41be5b023a5d6ad5a8718ed1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 01 20:32:57 compute-0 systemd[1]: libpod-conmon-81b911ce13454ed504462d6f240258a37487e7f41be5b023a5d6ad5a8718ed1a.scope: Deactivated successfully.
Dec 01 20:32:57 compute-0 sudo[92916]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:57 compute-0 sudo[93136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:57 compute-0 sudo[93136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:57 compute-0 sudo[93136]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:57 compute-0 sudo[93161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:32:57 compute-0 sudo[93161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:57 compute-0 sudo[93209]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzeoshfhcvndorlkcccwjdtfanghwvap ; /usr/bin/python3'
Dec 01 20:32:57 compute-0 sudo[93209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:57 compute-0 podman[93224]: 2025-12-01 20:32:57.647500035 +0000 UTC m=+0.056504850 container create bf2cc38db9aba6c7c424379e5ca016f3a39ee959cb714b317acfd24a2e2066f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_edison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Dec 01 20:32:57 compute-0 python3[93211]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:57 compute-0 systemd[1]: Started libpod-conmon-bf2cc38db9aba6c7c424379e5ca016f3a39ee959cb714b317acfd24a2e2066f8.scope.
Dec 01 20:32:57 compute-0 podman[93224]: 2025-12-01 20:32:57.618416899 +0000 UTC m=+0.027421744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:57 compute-0 ceph-mon[75880]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:57 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2361598368' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 01 20:32:57 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2361598368' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 01 20:32:57 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:57 compute-0 podman[93242]: 2025-12-01 20:32:57.732517723 +0000 UTC m=+0.051575706 container create dcbfbed75d525be0284902bf1f0ac05d61b79db76957ee4592bb4bf4514c75c9 (image=quay.io/ceph/ceph:v20, name=wonderful_noether, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:32:57 compute-0 podman[93224]: 2025-12-01 20:32:57.737665194 +0000 UTC m=+0.146670069 container init bf2cc38db9aba6c7c424379e5ca016f3a39ee959cb714b317acfd24a2e2066f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_edison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 01 20:32:57 compute-0 podman[93224]: 2025-12-01 20:32:57.749813876 +0000 UTC m=+0.158818691 container start bf2cc38db9aba6c7c424379e5ca016f3a39ee959cb714b317acfd24a2e2066f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:32:57 compute-0 laughing_edison[93249]: 167 167
Dec 01 20:32:57 compute-0 podman[93224]: 2025-12-01 20:32:57.753683978 +0000 UTC m=+0.162688793 container attach bf2cc38db9aba6c7c424379e5ca016f3a39ee959cb714b317acfd24a2e2066f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_edison, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:57 compute-0 systemd[1]: libpod-bf2cc38db9aba6c7c424379e5ca016f3a39ee959cb714b317acfd24a2e2066f8.scope: Deactivated successfully.
Dec 01 20:32:57 compute-0 podman[93224]: 2025-12-01 20:32:57.754923528 +0000 UTC m=+0.163928343 container died bf2cc38db9aba6c7c424379e5ca016f3a39ee959cb714b317acfd24a2e2066f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:57 compute-0 systemd[1]: Started libpod-conmon-dcbfbed75d525be0284902bf1f0ac05d61b79db76957ee4592bb4bf4514c75c9.scope.
Dec 01 20:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0549e147b4c70dd213f50cd69f5095124429cb13488f0d5e5b4a021704ac5316-merged.mount: Deactivated successfully.
Dec 01 20:32:57 compute-0 podman[93224]: 2025-12-01 20:32:57.790847499 +0000 UTC m=+0.199852274 container remove bf2cc38db9aba6c7c424379e5ca016f3a39ee959cb714b317acfd24a2e2066f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_edison, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:32:57 compute-0 podman[93242]: 2025-12-01 20:32:57.705887063 +0000 UTC m=+0.024945136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:57 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1d99475ddf37d562214c8a924c4a9f915214afc99fcafe11149d4c4211e375/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1d99475ddf37d562214c8a924c4a9f915214afc99fcafe11149d4c4211e375/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:57 compute-0 systemd[1]: libpod-conmon-bf2cc38db9aba6c7c424379e5ca016f3a39ee959cb714b317acfd24a2e2066f8.scope: Deactivated successfully.
Dec 01 20:32:57 compute-0 podman[93242]: 2025-12-01 20:32:57.819662405 +0000 UTC m=+0.138720448 container init dcbfbed75d525be0284902bf1f0ac05d61b79db76957ee4592bb4bf4514c75c9 (image=quay.io/ceph/ceph:v20, name=wonderful_noether, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 01 20:32:57 compute-0 podman[93242]: 2025-12-01 20:32:57.829802915 +0000 UTC m=+0.148860928 container start dcbfbed75d525be0284902bf1f0ac05d61b79db76957ee4592bb4bf4514c75c9 (image=quay.io/ceph/ceph:v20, name=wonderful_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 01 20:32:57 compute-0 podman[93242]: 2025-12-01 20:32:57.83439741 +0000 UTC m=+0.153455433 container attach dcbfbed75d525be0284902bf1f0ac05d61b79db76957ee4592bb4bf4514c75c9 (image=quay.io/ceph/ceph:v20, name=wonderful_noether, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 01 20:32:58 compute-0 podman[93287]: 2025-12-01 20:32:57.956792063 +0000 UTC m=+0.023502220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:32:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 01 20:32:58 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637041838' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 01 20:32:58 compute-0 wonderful_noether[93265]: 
Dec 01 20:32:58 compute-0 wonderful_noether[93265]: {"fsid":"dcf60a89-bba0-58b0-a1bf-d4bde723201b","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":104,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":33,"num_osds":3,"num_up_osds":3,"osd_up_since":1764621152,"num_in_osds":3,"osd_in_since":1764621128,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83881984,"bytes_avail":64328044544,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-12-01T20:32:53:727206+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-01T20:32:34.431656+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec 01 20:32:58 compute-0 podman[93287]: 2025-12-01 20:32:58.942967754 +0000 UTC m=+1.009677871 container create d2994726446986f263178068b2fcc9fffb3228acc6560e3cdcf7704a85c5031b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 01 20:32:58 compute-0 podman[93242]: 2025-12-01 20:32:58.973678561 +0000 UTC m=+1.292736564 container died dcbfbed75d525be0284902bf1f0ac05d61b79db76957ee4592bb4bf4514c75c9 (image=quay.io/ceph/ceph:v20, name=wonderful_noether, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 01 20:32:58 compute-0 systemd[1]: libpod-dcbfbed75d525be0284902bf1f0ac05d61b79db76957ee4592bb4bf4514c75c9.scope: Deactivated successfully.
Dec 01 20:32:58 compute-0 systemd[1]: Started libpod-conmon-d2994726446986f263178068b2fcc9fffb3228acc6560e3cdcf7704a85c5031b.scope.
Dec 01 20:32:59 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3637041838' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 01 20:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf1d99475ddf37d562214c8a924c4a9f915214afc99fcafe11149d4c4211e375-merged.mount: Deactivated successfully.
Dec 01 20:32:59 compute-0 podman[93242]: 2025-12-01 20:32:59.015022132 +0000 UTC m=+1.334080115 container remove dcbfbed75d525be0284902bf1f0ac05d61b79db76957ee4592bb4bf4514c75c9 (image=quay.io/ceph/ceph:v20, name=wonderful_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:32:59 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fefda27b6b691a218d951494c9e53ab5d46a48a6f0b12c7e282f1972b9660a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fefda27b6b691a218d951494c9e53ab5d46a48a6f0b12c7e282f1972b9660a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fefda27b6b691a218d951494c9e53ab5d46a48a6f0b12c7e282f1972b9660a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9fefda27b6b691a218d951494c9e53ab5d46a48a6f0b12c7e282f1972b9660a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:59 compute-0 systemd[1]: libpod-conmon-dcbfbed75d525be0284902bf1f0ac05d61b79db76957ee4592bb4bf4514c75c9.scope: Deactivated successfully.
Dec 01 20:32:59 compute-0 sudo[93209]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:59 compute-0 podman[93287]: 2025-12-01 20:32:59.035351403 +0000 UTC m=+1.102061560 container init d2994726446986f263178068b2fcc9fffb3228acc6560e3cdcf7704a85c5031b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 01 20:32:59 compute-0 podman[93287]: 2025-12-01 20:32:59.04416505 +0000 UTC m=+1.110875177 container start d2994726446986f263178068b2fcc9fffb3228acc6560e3cdcf7704a85c5031b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rubin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:32:59 compute-0 podman[93287]: 2025-12-01 20:32:59.04734882 +0000 UTC m=+1.114058967 container attach d2994726446986f263178068b2fcc9fffb3228acc6560e3cdcf7704a85c5031b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rubin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:32:59 compute-0 sudo[93364]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhuunqogvsmqxqlwielhwtdrivnzbuml ; /usr/bin/python3'
Dec 01 20:32:59 compute-0 sudo[93364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:32:59 compute-0 python3[93366]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:32:59 compute-0 sad_rubin[93328]: {
Dec 01 20:32:59 compute-0 sad_rubin[93328]:     "0": [
Dec 01 20:32:59 compute-0 sad_rubin[93328]:         {
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "devices": [
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "/dev/loop3"
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             ],
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_name": "ceph_lv0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_size": "21470642176",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "name": "ceph_lv0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "tags": {
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.cluster_name": "ceph",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.crush_device_class": "",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.encrypted": "0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.objectstore": "bluestore",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.osd_id": "0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.type": "block",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.vdo": "0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.with_tpm": "0"
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             },
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "type": "block",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "vg_name": "ceph_vg0"
Dec 01 20:32:59 compute-0 sad_rubin[93328]:         }
Dec 01 20:32:59 compute-0 sad_rubin[93328]:     ],
Dec 01 20:32:59 compute-0 sad_rubin[93328]:     "1": [
Dec 01 20:32:59 compute-0 sad_rubin[93328]:         {
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "devices": [
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "/dev/loop4"
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             ],
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_name": "ceph_lv1",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_size": "21470642176",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "name": "ceph_lv1",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "tags": {
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.cluster_name": "ceph",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.crush_device_class": "",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.encrypted": "0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.objectstore": "bluestore",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.osd_id": "1",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.type": "block",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.vdo": "0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.with_tpm": "0"
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             },
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "type": "block",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "vg_name": "ceph_vg1"
Dec 01 20:32:59 compute-0 sad_rubin[93328]:         }
Dec 01 20:32:59 compute-0 sad_rubin[93328]:     ],
Dec 01 20:32:59 compute-0 sad_rubin[93328]:     "2": [
Dec 01 20:32:59 compute-0 sad_rubin[93328]:         {
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "devices": [
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "/dev/loop5"
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             ],
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_name": "ceph_lv2",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_size": "21470642176",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "name": "ceph_lv2",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "tags": {
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.cluster_name": "ceph",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.crush_device_class": "",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.encrypted": "0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.objectstore": "bluestore",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.osd_id": "2",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.type": "block",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.vdo": "0",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:                 "ceph.with_tpm": "0"
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             },
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "type": "block",
Dec 01 20:32:59 compute-0 sad_rubin[93328]:             "vg_name": "ceph_vg2"
Dec 01 20:32:59 compute-0 sad_rubin[93328]:         }
Dec 01 20:32:59 compute-0 sad_rubin[93328]:     ]
Dec 01 20:32:59 compute-0 sad_rubin[93328]: }
Dec 01 20:32:59 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:32:59 compute-0 systemd[1]: libpod-d2994726446986f263178068b2fcc9fffb3228acc6560e3cdcf7704a85c5031b.scope: Deactivated successfully.
Dec 01 20:32:59 compute-0 podman[93287]: 2025-12-01 20:32:59.398995432 +0000 UTC m=+1.465705559 container died d2994726446986f263178068b2fcc9fffb3228acc6560e3cdcf7704a85c5031b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rubin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9fefda27b6b691a218d951494c9e53ab5d46a48a6f0b12c7e282f1972b9660a-merged.mount: Deactivated successfully.
Dec 01 20:32:59 compute-0 podman[93287]: 2025-12-01 20:32:59.438364021 +0000 UTC m=+1.505074148 container remove d2994726446986f263178068b2fcc9fffb3228acc6560e3cdcf7704a85c5031b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_rubin, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:32:59 compute-0 systemd[1]: libpod-conmon-d2994726446986f263178068b2fcc9fffb3228acc6560e3cdcf7704a85c5031b.scope: Deactivated successfully.
Dec 01 20:32:59 compute-0 podman[93371]: 2025-12-01 20:32:59.458352391 +0000 UTC m=+0.082318483 container create b70fca5f74ce45ec1bd20ec3ba5355655f3ba1df0d92205aed416ef77fd9b8df (image=quay.io/ceph/ceph:v20, name=boring_cori, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 01 20:32:59 compute-0 sudo[93161]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:59 compute-0 systemd[1]: Started libpod-conmon-b70fca5f74ce45ec1bd20ec3ba5355655f3ba1df0d92205aed416ef77fd9b8df.scope.
Dec 01 20:32:59 compute-0 podman[93371]: 2025-12-01 20:32:59.424450333 +0000 UTC m=+0.048416495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:32:59 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fae3aeab5a05c061b449eee0d7136a94c0f11a32e5902f7d35604ef6c93c54f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fae3aeab5a05c061b449eee0d7136a94c0f11a32e5902f7d35604ef6c93c54f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:32:59 compute-0 sudo[93398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:32:59 compute-0 sudo[93398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:59 compute-0 sudo[93398]: pam_unix(sudo:session): session closed for user root
Dec 01 20:32:59 compute-0 podman[93371]: 2025-12-01 20:32:59.568451858 +0000 UTC m=+0.192417980 container init b70fca5f74ce45ec1bd20ec3ba5355655f3ba1df0d92205aed416ef77fd9b8df (image=quay.io/ceph/ceph:v20, name=boring_cori, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:32:59 compute-0 podman[93371]: 2025-12-01 20:32:59.576806771 +0000 UTC m=+0.200772863 container start b70fca5f74ce45ec1bd20ec3ba5355655f3ba1df0d92205aed416ef77fd9b8df (image=quay.io/ceph/ceph:v20, name=boring_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 01 20:32:59 compute-0 sudo[93426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:32:59 compute-0 sudo[93426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:32:59 compute-0 podman[93371]: 2025-12-01 20:32:59.62665691 +0000 UTC m=+0.250623082 container attach b70fca5f74ce45ec1bd20ec3ba5355655f3ba1df0d92205aed416ef77fd9b8df (image=quay.io/ceph/ceph:v20, name=boring_cori, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 01 20:32:59 compute-0 podman[93483]: 2025-12-01 20:32:59.899235753 +0000 UTC m=+0.043907754 container create 637bbf0ecea7d6d6ba6e95790c28209349113a3617d8794f8feb1b4794743bc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 01 20:32:59 compute-0 systemd[1]: Started libpod-conmon-637bbf0ecea7d6d6ba6e95790c28209349113a3617d8794f8feb1b4794743bc9.scope.
Dec 01 20:32:59 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:32:59 compute-0 podman[93483]: 2025-12-01 20:32:59.880155711 +0000 UTC m=+0.024827762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:32:59 compute-0 podman[93483]: 2025-12-01 20:32:59.98840792 +0000 UTC m=+0.133080031 container init 637bbf0ecea7d6d6ba6e95790c28209349113a3617d8794f8feb1b4794743bc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 01 20:32:59 compute-0 podman[93483]: 2025-12-01 20:32:59.999386506 +0000 UTC m=+0.144058517 container start 637bbf0ecea7d6d6ba6e95790c28209349113a3617d8794f8feb1b4794743bc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 01 20:33:00 compute-0 great_allen[93500]: 167 167
Dec 01 20:33:00 compute-0 systemd[1]: libpod-637bbf0ecea7d6d6ba6e95790c28209349113a3617d8794f8feb1b4794743bc9.scope: Deactivated successfully.
Dec 01 20:33:00 compute-0 ceph-mon[75880]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:00 compute-0 podman[93483]: 2025-12-01 20:33:00.006290453 +0000 UTC m=+0.150962464 container attach 637bbf0ecea7d6d6ba6e95790c28209349113a3617d8794f8feb1b4794743bc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Dec 01 20:33:00 compute-0 podman[93483]: 2025-12-01 20:33:00.006715727 +0000 UTC m=+0.151387738 container died 637bbf0ecea7d6d6ba6e95790c28209349113a3617d8794f8feb1b4794743bc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bfc86d0409df90224c92c018b30f1cddc763470039fe4a16594166e670911f2-merged.mount: Deactivated successfully.
Dec 01 20:33:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 01 20:33:00 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1867306383' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 01 20:33:00 compute-0 podman[93483]: 2025-12-01 20:33:00.05479558 +0000 UTC m=+0.199467581 container remove 637bbf0ecea7d6d6ba6e95790c28209349113a3617d8794f8feb1b4794743bc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 01 20:33:00 compute-0 boring_cori[93401]: 
Dec 01 20:33:00 compute-0 boring_cori[93401]: {"epoch":1,"fsid":"dcf60a89-bba0-58b0-a1bf-d4bde723201b","modified":"2025-12-01T20:31:09.927398Z","created":"2025-12-01T20:31:09.927398Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec 01 20:33:00 compute-0 boring_cori[93401]: dumped monmap epoch 1
Dec 01 20:33:00 compute-0 systemd[1]: libpod-conmon-637bbf0ecea7d6d6ba6e95790c28209349113a3617d8794f8feb1b4794743bc9.scope: Deactivated successfully.
Dec 01 20:33:00 compute-0 systemd[1]: libpod-b70fca5f74ce45ec1bd20ec3ba5355655f3ba1df0d92205aed416ef77fd9b8df.scope: Deactivated successfully.
Dec 01 20:33:00 compute-0 podman[93522]: 2025-12-01 20:33:00.113131927 +0000 UTC m=+0.022190520 container died b70fca5f74ce45ec1bd20ec3ba5355655f3ba1df0d92205aed416ef77fd9b8df (image=quay.io/ceph/ceph:v20, name=boring_cori, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fae3aeab5a05c061b449eee0d7136a94c0f11a32e5902f7d35604ef6c93c54f-merged.mount: Deactivated successfully.
Dec 01 20:33:00 compute-0 podman[93522]: 2025-12-01 20:33:00.165062103 +0000 UTC m=+0.074120626 container remove b70fca5f74ce45ec1bd20ec3ba5355655f3ba1df0d92205aed416ef77fd9b8df (image=quay.io/ceph/ceph:v20, name=boring_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:33:00 compute-0 systemd[1]: libpod-conmon-b70fca5f74ce45ec1bd20ec3ba5355655f3ba1df0d92205aed416ef77fd9b8df.scope: Deactivated successfully.
Dec 01 20:33:00 compute-0 sudo[93364]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:00 compute-0 podman[93542]: 2025-12-01 20:33:00.251751872 +0000 UTC m=+0.047720703 container create f84a0d2fb1c13e11d9196335a0f3077ef01b958f893be8c492d2e59e4f0e9815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:33:00 compute-0 systemd[1]: Started libpod-conmon-f84a0d2fb1c13e11d9196335a0f3077ef01b958f893be8c492d2e59e4f0e9815.scope.
Dec 01 20:33:00 compute-0 podman[93542]: 2025-12-01 20:33:00.226799707 +0000 UTC m=+0.022768588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:00 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5ac963fc17c3e4dc4e83bb8bf6f3f4d5ea8c6bab2a710e06effcd98fffb81c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5ac963fc17c3e4dc4e83bb8bf6f3f4d5ea8c6bab2a710e06effcd98fffb81c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5ac963fc17c3e4dc4e83bb8bf6f3f4d5ea8c6bab2a710e06effcd98fffb81c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5ac963fc17c3e4dc4e83bb8bf6f3f4d5ea8c6bab2a710e06effcd98fffb81c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:00 compute-0 podman[93542]: 2025-12-01 20:33:00.340267279 +0000 UTC m=+0.136236120 container init f84a0d2fb1c13e11d9196335a0f3077ef01b958f893be8c492d2e59e4f0e9815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 01 20:33:00 compute-0 podman[93542]: 2025-12-01 20:33:00.353545306 +0000 UTC m=+0.149514167 container start f84a0d2fb1c13e11d9196335a0f3077ef01b958f893be8c492d2e59e4f0e9815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 01 20:33:00 compute-0 podman[93542]: 2025-12-01 20:33:00.358904095 +0000 UTC m=+0.154872956 container attach f84a0d2fb1c13e11d9196335a0f3077ef01b958f893be8c492d2e59e4f0e9815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bouman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:33:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:00 compute-0 sudo[93596]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abshwgtasvelspmrylnlxitqrospzxcu ; /usr/bin/python3'
Dec 01 20:33:00 compute-0 sudo[93596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:00 compute-0 python3[93598]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:33:00 compute-0 podman[93620]: 2025-12-01 20:33:00.846810237 +0000 UTC m=+0.049665604 container create fa24d524408ae59b1508edef23f6a1c4e386f2668835f34230217004563f4e7d (image=quay.io/ceph/ceph:v20, name=keen_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 01 20:33:00 compute-0 systemd[1]: Started libpod-conmon-fa24d524408ae59b1508edef23f6a1c4e386f2668835f34230217004563f4e7d.scope.
Dec 01 20:33:00 compute-0 podman[93620]: 2025-12-01 20:33:00.821917894 +0000 UTC m=+0.024773291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:33:00 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f96863d9446aef475223cc8a96ffe2d5e47822f984a0cd863b6f1bd2dec907/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f96863d9446aef475223cc8a96ffe2d5e47822f984a0cd863b6f1bd2dec907/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:00 compute-0 podman[93620]: 2025-12-01 20:33:00.934960283 +0000 UTC m=+0.137815710 container init fa24d524408ae59b1508edef23f6a1c4e386f2668835f34230217004563f4e7d (image=quay.io/ceph/ceph:v20, name=keen_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 01 20:33:00 compute-0 podman[93620]: 2025-12-01 20:33:00.943133141 +0000 UTC m=+0.145988508 container start fa24d524408ae59b1508edef23f6a1c4e386f2668835f34230217004563f4e7d (image=quay.io/ceph/ceph:v20, name=keen_gould, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 01 20:33:00 compute-0 podman[93620]: 2025-12-01 20:33:00.946210388 +0000 UTC m=+0.149065805 container attach fa24d524408ae59b1508edef23f6a1c4e386f2668835f34230217004563f4e7d (image=quay.io/ceph/ceph:v20, name=keen_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:33:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1867306383' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 01 20:33:01 compute-0 lvm[93703]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:33:01 compute-0 lvm[93703]: VG ceph_vg2 finished
Dec 01 20:33:01 compute-0 lvm[93699]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:33:01 compute-0 lvm[93699]: VG ceph_vg0 finished
Dec 01 20:33:01 compute-0 lvm[93701]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:33:01 compute-0 lvm[93701]: VG ceph_vg1 finished
Dec 01 20:33:01 compute-0 pensive_bouman[93558]: {}
Dec 01 20:33:01 compute-0 systemd[1]: libpod-f84a0d2fb1c13e11d9196335a0f3077ef01b958f893be8c492d2e59e4f0e9815.scope: Deactivated successfully.
Dec 01 20:33:01 compute-0 podman[93542]: 2025-12-01 20:33:01.217167489 +0000 UTC m=+1.013136350 container died f84a0d2fb1c13e11d9196335a0f3077ef01b958f893be8c492d2e59e4f0e9815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 01 20:33:01 compute-0 systemd[1]: libpod-f84a0d2fb1c13e11d9196335a0f3077ef01b958f893be8c492d2e59e4f0e9815.scope: Consumed 1.392s CPU time.
Dec 01 20:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f5ac963fc17c3e4dc4e83bb8bf6f3f4d5ea8c6bab2a710e06effcd98fffb81c-merged.mount: Deactivated successfully.
Dec 01 20:33:01 compute-0 podman[93542]: 2025-12-01 20:33:01.272820191 +0000 UTC m=+1.068789042 container remove f84a0d2fb1c13e11d9196335a0f3077ef01b958f893be8c492d2e59e4f0e9815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bouman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:01 compute-0 systemd[1]: libpod-conmon-f84a0d2fb1c13e11d9196335a0f3077ef01b958f893be8c492d2e59e4f0e9815.scope: Deactivated successfully.
Dec 01 20:33:01 compute-0 sudo[93426]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:33:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:33:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:01 compute-0 ceph-mgr[76174]: [progress INFO root] update: starting ev 84ba2d4a-177b-4eb9-a957-5f21bf61ce7c (Updating mds.cephfs deployment (+1 -> 1))
Dec 01 20:33:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.pstuwl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 01 20:33:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.pstuwl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 01 20:33:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.pstuwl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 01 20:33:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:33:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:33:01 compute-0 ceph-mgr[76174]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.pstuwl on compute-0
Dec 01 20:33:01 compute-0 ceph-mgr[76174]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.pstuwl on compute-0
Dec 01 20:33:01 compute-0 sudo[93720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:33:01 compute-0 sudo[93720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:01 compute-0 sudo[93720]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec 01 20:33:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2654139236' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 01 20:33:01 compute-0 keen_gould[93660]: [client.openstack]
Dec 01 20:33:01 compute-0 keen_gould[93660]:         key = AQDp+i1pAAAAABAApJ092b/18HbtI+y6dgTdfg==
Dec 01 20:33:01 compute-0 keen_gould[93660]:         caps mgr = "allow *"
Dec 01 20:33:01 compute-0 keen_gould[93660]:         caps mon = "profile rbd"
Dec 01 20:33:01 compute-0 keen_gould[93660]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec 01 20:33:01 compute-0 systemd[1]: libpod-fa24d524408ae59b1508edef23f6a1c4e386f2668835f34230217004563f4e7d.scope: Deactivated successfully.
Dec 01 20:33:01 compute-0 podman[93620]: 2025-12-01 20:33:01.486145588 +0000 UTC m=+0.689000975 container died fa24d524408ae59b1508edef23f6a1c4e386f2668835f34230217004563f4e7d (image=quay.io/ceph/ceph:v20, name=keen_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 01 20:33:01 compute-0 sudo[93745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:33:01 compute-0 sudo[93745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-24f96863d9446aef475223cc8a96ffe2d5e47822f984a0cd863b6f1bd2dec907-merged.mount: Deactivated successfully.
Dec 01 20:33:01 compute-0 podman[93620]: 2025-12-01 20:33:01.554052755 +0000 UTC m=+0.756908162 container remove fa24d524408ae59b1508edef23f6a1c4e386f2668835f34230217004563f4e7d (image=quay.io/ceph/ceph:v20, name=keen_gould, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 01 20:33:01 compute-0 systemd[1]: libpod-conmon-fa24d524408ae59b1508edef23f6a1c4e386f2668835f34230217004563f4e7d.scope: Deactivated successfully.
Dec 01 20:33:01 compute-0 sudo[93596]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:02 compute-0 ceph-mon[75880]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.pstuwl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 01 20:33:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.pstuwl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 01 20:33:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:33:02 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2654139236' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 01 20:33:02 compute-0 podman[93825]: 2025-12-01 20:33:02.060426889 +0000 UTC m=+0.067941661 container create 8a0367a14d825e545dc9ce1c3bcc542cf6948b283e13ba3c35d2e319c6e76029 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wright, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:33:02 compute-0 systemd[1]: Started libpod-conmon-8a0367a14d825e545dc9ce1c3bcc542cf6948b283e13ba3c35d2e319c6e76029.scope.
Dec 01 20:33:02 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:02 compute-0 podman[93825]: 2025-12-01 20:33:02.03154385 +0000 UTC m=+0.039058711 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:02 compute-0 podman[93825]: 2025-12-01 20:33:02.144813836 +0000 UTC m=+0.152328657 container init 8a0367a14d825e545dc9ce1c3bcc542cf6948b283e13ba3c35d2e319c6e76029 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:33:02 compute-0 podman[93825]: 2025-12-01 20:33:02.155903346 +0000 UTC m=+0.163418137 container start 8a0367a14d825e545dc9ce1c3bcc542cf6948b283e13ba3c35d2e319c6e76029 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wright, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:33:02 compute-0 podman[93825]: 2025-12-01 20:33:02.159754436 +0000 UTC m=+0.167269237 container attach 8a0367a14d825e545dc9ce1c3bcc542cf6948b283e13ba3c35d2e319c6e76029 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wright, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:33:02 compute-0 sad_wright[93841]: 167 167
Dec 01 20:33:02 compute-0 systemd[1]: libpod-8a0367a14d825e545dc9ce1c3bcc542cf6948b283e13ba3c35d2e319c6e76029.scope: Deactivated successfully.
Dec 01 20:33:02 compute-0 podman[93825]: 2025-12-01 20:33:02.164745083 +0000 UTC m=+0.172259874 container died 8a0367a14d825e545dc9ce1c3bcc542cf6948b283e13ba3c35d2e319c6e76029 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wright, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-64dd7b37e8434f28d519d39cc5f3c3707fedd56ab3870368a7017f7d27c1f4d6-merged.mount: Deactivated successfully.
Dec 01 20:33:02 compute-0 podman[93825]: 2025-12-01 20:33:02.216974868 +0000 UTC m=+0.224489649 container remove 8a0367a14d825e545dc9ce1c3bcc542cf6948b283e13ba3c35d2e319c6e76029 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 01 20:33:02 compute-0 systemd[1]: libpod-conmon-8a0367a14d825e545dc9ce1c3bcc542cf6948b283e13ba3c35d2e319c6e76029.scope: Deactivated successfully.
Dec 01 20:33:02 compute-0 systemd[1]: Reloading.
Dec 01 20:33:02 compute-0 systemd-rc-local-generator[93881]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:33:02 compute-0 systemd-sysv-generator[93887]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:33:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:02 compute-0 systemd[1]: Reloading.
Dec 01 20:33:02 compute-0 systemd-rc-local-generator[94005]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:33:02 compute-0 systemd-sysv-generator[94009]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:33:02 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.pstuwl for dcf60a89-bba0-58b0-a1bf-d4bde723201b...
Dec 01 20:33:02 compute-0 sudo[94084]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzirkrfuiqdnmaxueaagycqaqzsvrmen ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764621182.5000603-36950-24085773743330/async_wrapper.py j983427903858 30 /home/zuul/.ansible/tmp/ansible-tmp-1764621182.5000603-36950-24085773743330/AnsiballZ_command.py _'
Dec 01 20:33:02 compute-0 sudo[94084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:03 compute-0 ceph-mon[75880]: Deploying daemon mds.cephfs.compute-0.pstuwl on compute-0
Dec 01 20:33:03 compute-0 ansible-async_wrapper.py[94096]: Invoked with j983427903858 30 /home/zuul/.ansible/tmp/ansible-tmp-1764621182.5000603-36950-24085773743330/AnsiballZ_command.py _
Dec 01 20:33:03 compute-0 ansible-async_wrapper.py[94139]: Starting module and watcher
Dec 01 20:33:03 compute-0 ansible-async_wrapper.py[94139]: Start watching 94140 (30)
Dec 01 20:33:03 compute-0 ansible-async_wrapper.py[94140]: Start module (94140)
Dec 01 20:33:03 compute-0 ansible-async_wrapper.py[94096]: Return async_wrapper task started.
Dec 01 20:33:03 compute-0 sudo[94084]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:03 compute-0 podman[94131]: 2025-12-01 20:33:03.210945754 +0000 UTC m=+0.071541724 container create 680fbbfc21d383a03caa5debcbeb5989b5687c5bfcc80c487f7157e7415dfb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mds-cephfs-compute-0-pstuwl, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 20:33:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:33:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:33:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:33:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:33:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:33:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:33:03 compute-0 podman[94131]: 2025-12-01 20:33:03.174657652 +0000 UTC m=+0.035253662 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da4c490350a309e5ae96fdfa8c070e3f41feebf54a77c4c68d2ee4f60851b96b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da4c490350a309e5ae96fdfa8c070e3f41feebf54a77c4c68d2ee4f60851b96b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da4c490350a309e5ae96fdfa8c070e3f41feebf54a77c4c68d2ee4f60851b96b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da4c490350a309e5ae96fdfa8c070e3f41feebf54a77c4c68d2ee4f60851b96b/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.pstuwl supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:03 compute-0 podman[94131]: 2025-12-01 20:33:03.297576752 +0000 UTC m=+0.158172762 container init 680fbbfc21d383a03caa5debcbeb5989b5687c5bfcc80c487f7157e7415dfb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mds-cephfs-compute-0-pstuwl, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 01 20:33:03 compute-0 podman[94131]: 2025-12-01 20:33:03.306632006 +0000 UTC m=+0.167227966 container start 680fbbfc21d383a03caa5debcbeb5989b5687c5bfcc80c487f7157e7415dfb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mds-cephfs-compute-0-pstuwl, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:33:03 compute-0 bash[94131]: 680fbbfc21d383a03caa5debcbeb5989b5687c5bfcc80c487f7157e7415dfb7b
Dec 01 20:33:03 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.pstuwl for dcf60a89-bba0-58b0-a1bf-d4bde723201b.
Dec 01 20:33:03 compute-0 python3[94144]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:33:03 compute-0 ceph-mds[94156]: set uid:gid to 167:167 (ceph:ceph)
Dec 01 20:33:03 compute-0 ceph-mds[94156]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Dec 01 20:33:03 compute-0 ceph-mds[94156]: main not setting numa affinity
Dec 01 20:33:03 compute-0 ceph-mds[94156]: pidfile_write: ignore empty --pid-file
Dec 01 20:33:03 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mds-cephfs-compute-0-pstuwl[94152]: starting mds.cephfs.compute-0.pstuwl at 
Dec 01 20:33:03 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl Updating MDS map to version 2 from mon.0
Dec 01 20:33:03 compute-0 sudo[93745]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:33:03 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:33:03 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 01 20:33:03 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:03 compute-0 ceph-mgr[76174]: [progress INFO root] complete: finished ev 84ba2d4a-177b-4eb9-a957-5f21bf61ce7c (Updating mds.cephfs deployment (+1 -> 1))
Dec 01 20:33:03 compute-0 ceph-mgr[76174]: [progress INFO root] Completed event 84ba2d4a-177b-4eb9-a957-5f21bf61ce7c (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Dec 01 20:33:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 01 20:33:03 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 01 20:33:03 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:03 compute-0 podman[94157]: 2025-12-01 20:33:03.431082735 +0000 UTC m=+0.070295634 container create ac1775c6be536fd3e4a6e8e769685215f71931a1cd7718c99195ef2e6186a608 (image=quay.io/ceph/ceph:v20, name=sad_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:33:03 compute-0 systemd[1]: Started libpod-conmon-ac1775c6be536fd3e4a6e8e769685215f71931a1cd7718c99195ef2e6186a608.scope.
Dec 01 20:33:03 compute-0 podman[94157]: 2025-12-01 20:33:03.406270254 +0000 UTC m=+0.045483243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:33:03 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:03 compute-0 sudo[94188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b674fa03b0afb6381236c2952090191feb18721fa15834101ecd73a4d9e28c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:03 compute-0 sudo[94188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b674fa03b0afb6381236c2952090191feb18721fa15834101ecd73a4d9e28c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:03 compute-0 sudo[94188]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:03 compute-0 podman[94157]: 2025-12-01 20:33:03.528873074 +0000 UTC m=+0.168086003 container init ac1775c6be536fd3e4a6e8e769685215f71931a1cd7718c99195ef2e6186a608 (image=quay.io/ceph/ceph:v20, name=sad_bose, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:33:03 compute-0 podman[94157]: 2025-12-01 20:33:03.538079744 +0000 UTC m=+0.177292633 container start ac1775c6be536fd3e4a6e8e769685215f71931a1cd7718c99195ef2e6186a608 (image=quay.io/ceph/ceph:v20, name=sad_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 01 20:33:03 compute-0 podman[94157]: 2025-12-01 20:33:03.541615826 +0000 UTC m=+0.180828755 container attach ac1775c6be536fd3e4a6e8e769685215f71931a1cd7718c99195ef2e6186a608 (image=quay.io/ceph/ceph:v20, name=sad_bose, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:33:03 compute-0 sudo[94219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:33:03 compute-0 sudo[94219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:03 compute-0 sudo[94219]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:03 compute-0 sudo[94245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 01 20:33:03 compute-0 sudo[94245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:03 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:33:03 compute-0 sad_bose[94214]: 
Dec 01 20:33:03 compute-0 sad_bose[94214]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 01 20:33:03 compute-0 systemd[1]: libpod-ac1775c6be536fd3e4a6e8e769685215f71931a1cd7718c99195ef2e6186a608.scope: Deactivated successfully.
Dec 01 20:33:03 compute-0 podman[94157]: 2025-12-01 20:33:03.981867527 +0000 UTC m=+0.621080456 container died ac1775c6be536fd3e4a6e8e769685215f71931a1cd7718c99195ef2e6186a608 (image=quay.io/ceph/ceph:v20, name=sad_bose, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-00b674fa03b0afb6381236c2952090191feb18721fa15834101ecd73a4d9e28c-merged.mount: Deactivated successfully.
Dec 01 20:33:04 compute-0 podman[94157]: 2025-12-01 20:33:04.040313197 +0000 UTC m=+0.679526126 container remove ac1775c6be536fd3e4a6e8e769685215f71931a1cd7718c99195ef2e6186a608 (image=quay.io/ceph/ceph:v20, name=sad_bose, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e3 new map
Dec 01 20:33:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-12-01T20:33:04:039724+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-01T20:32:53.726970+0000
                                           modified        2025-12-01T20:32:53.726970+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.pstuwl{-1:14242} state up:standby seq 1 addr [v2:192.168.122.100:6814/1220564160,v1:192.168.122.100:6815/1220564160] compat {c=[1],r=[1],i=[1fff]}]
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl Updating MDS map to version 3 from mon.0
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl Monitors have assigned me to become a standby
Dec 01 20:33:04 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1220564160,v1:192.168.122.100:6815/1220564160] up:boot
Dec 01 20:33:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1220564160,v1:192.168.122.100:6815/1220564160] as mds.0
Dec 01 20:33:04 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.pstuwl assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 01 20:33:04 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 01 20:33:04 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 01 20:33:04 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 01 20:33:04 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec 01 20:33:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.pstuwl"} v 0)
Dec 01 20:33:04 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.pstuwl"} : dispatch
Dec 01 20:33:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e3 all = 0
Dec 01 20:33:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e4 new map
Dec 01 20:33:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-12-01T20:33:04:048520+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-01T20:32:53.726970+0000
                                           modified        2025-12-01T20:33:04.048511+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14242}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.pstuwl{0:14242} state up:creating seq 1 addr [v2:192.168.122.100:6814/1220564160,v1:192.168.122.100:6815/1220564160] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 01 20:33:04 compute-0 ceph-mon[75880]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl Updating MDS map to version 4 from mon.0
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.4 handle_mds_map I am now mds.0.4
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x1
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x100
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x600
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x601
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x602
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x603
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x604
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x605
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x606
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x607
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x608
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.cache creating system inode with ino:0x609
Dec 01 20:33:04 compute-0 systemd[1]: libpod-conmon-ac1775c6be536fd3e4a6e8e769685215f71931a1cd7718c99195ef2e6186a608.scope: Deactivated successfully.
Dec 01 20:33:04 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.pstuwl=up:creating}
Dec 01 20:33:04 compute-0 ansible-async_wrapper.py[94140]: Module complete (94140)
Dec 01 20:33:04 compute-0 ceph-mds[94156]: mds.0.4 creating_done
Dec 01 20:33:04 compute-0 ceph-mon[75880]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.pstuwl is now active in filesystem cephfs as rank 0
Dec 01 20:33:04 compute-0 podman[94358]: 2025-12-01 20:33:04.217365132 +0000 UTC m=+0.072925248 container exec 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:33:04 compute-0 podman[94358]: 2025-12-01 20:33:04.33353634 +0000 UTC m=+0.189096436 container exec_died 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:33:04 compute-0 sudo[94425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhjwumdzqifkbkcfkhhxqazqnohomxam ; /usr/bin/python3'
Dec 01 20:33:04 compute-0 sudo[94425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:33:04 compute-0 sshd-session[94289]: Received disconnect from 80.94.93.119 port 15092:11:  [preauth]
Dec 01 20:33:04 compute-0 sshd-session[94289]: Disconnected from authenticating user root 80.94.93.119 port 15092 [preauth]
Dec 01 20:33:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:04 compute-0 python3[94434]: ansible-ansible.legacy.async_status Invoked with jid=j983427903858.94096 mode=status _async_dir=/root/.ansible_async
Dec 01 20:33:04 compute-0 sudo[94425]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:04 compute-0 sudo[94550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybwvbvtftpjytxynboaukkynwrsycrnm ; /usr/bin/python3'
Dec 01 20:33:04 compute-0 sudo[94550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:04 compute-0 python3[94556]: ansible-ansible.legacy.async_status Invoked with jid=j983427903858.94096 mode=cleanup _async_dir=/root/.ansible_async
Dec 01 20:33:04 compute-0 sudo[94550]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:05 compute-0 ceph-mon[75880]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:33:05 compute-0 ceph-mon[75880]: mds.? [v2:192.168.122.100:6814/1220564160,v1:192.168.122.100:6815/1220564160] up:boot
Dec 01 20:33:05 compute-0 ceph-mon[75880]: daemon mds.cephfs.compute-0.pstuwl assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 01 20:33:05 compute-0 ceph-mon[75880]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 01 20:33:05 compute-0 ceph-mon[75880]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 01 20:33:05 compute-0 ceph-mon[75880]: Cluster is now healthy
Dec 01 20:33:05 compute-0 ceph-mon[75880]: fsmap cephfs:0 1 up:standby
Dec 01 20:33:05 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.pstuwl"} : dispatch
Dec 01 20:33:05 compute-0 ceph-mon[75880]: fsmap cephfs:1 {0=cephfs.compute-0.pstuwl=up:creating}
Dec 01 20:33:05 compute-0 ceph-mon[75880]: daemon mds.cephfs.compute-0.pstuwl is now active in filesystem cephfs as rank 0
Dec 01 20:33:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e5 new map
Dec 01 20:33:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-12-01T20:33:05:056385+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-01T20:32:53.726970+0000
                                           modified        2025-12-01T20:33:05.056382+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14242}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14242 members: 14242
                                           [mds.cephfs.compute-0.pstuwl{0:14242} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1220564160,v1:192.168.122.100:6815/1220564160] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 01 20:33:05 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl Updating MDS map to version 5 from mon.0
Dec 01 20:33:05 compute-0 ceph-mds[94156]: mds.0.4 handle_mds_map I am now mds.0.4
Dec 01 20:33:05 compute-0 ceph-mds[94156]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec 01 20:33:05 compute-0 ceph-mds[94156]: mds.0.4 recovery_done -- successful recovery!
Dec 01 20:33:05 compute-0 ceph-mds[94156]: mds.0.4 active_start
Dec 01 20:33:05 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1220564160,v1:192.168.122.100:6815/1220564160] up:active
Dec 01 20:33:05 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.pstuwl=up:active}
Dec 01 20:33:05 compute-0 sudo[94245]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:33:05 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:33:05 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:33:05 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:33:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:33:05 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:33:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:33:05 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:33:05 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:33:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:33:05 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:33:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:33:05 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:33:05 compute-0 sudo[94629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:33:05 compute-0 sudo[94629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:05 compute-0 sudo[94629]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:05 compute-0 sudo[94677]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wujouagwwnicqvxpcaqpxlmzdpgrwqej ; /usr/bin/python3'
Dec 01 20:33:05 compute-0 sudo[94677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:05 compute-0 sudo[94678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:33:05 compute-0 sudo[94678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:05 compute-0 python3[94686]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:33:05 compute-0 podman[94705]: 2025-12-01 20:33:05.465154839 +0000 UTC m=+0.041868949 container create 0fe69b28811529b272d0ef08e9be9cacb56c6f9453799d8b3e0370d3b6dc7f46 (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:33:05 compute-0 systemd[1]: Started libpod-conmon-0fe69b28811529b272d0ef08e9be9cacb56c6f9453799d8b3e0370d3b6dc7f46.scope.
Dec 01 20:33:05 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2bc8ca0481b2f3e0472790cf892cc34e470cc986acba4dd72a67883146cf09/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2bc8ca0481b2f3e0472790cf892cc34e470cc986acba4dd72a67883146cf09/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:05 compute-0 podman[94705]: 2025-12-01 20:33:05.544107275 +0000 UTC m=+0.120821445 container init 0fe69b28811529b272d0ef08e9be9cacb56c6f9453799d8b3e0370d3b6dc7f46 (image=quay.io/ceph/ceph:v20, name=zen_bartik, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 01 20:33:05 compute-0 podman[94705]: 2025-12-01 20:33:05.448255647 +0000 UTC m=+0.024969777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:33:05 compute-0 podman[94705]: 2025-12-01 20:33:05.5506082 +0000 UTC m=+0.127322310 container start 0fe69b28811529b272d0ef08e9be9cacb56c6f9453799d8b3e0370d3b6dc7f46 (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:33:05 compute-0 podman[94705]: 2025-12-01 20:33:05.556522656 +0000 UTC m=+0.133236806 container attach 0fe69b28811529b272d0ef08e9be9cacb56c6f9453799d8b3e0370d3b6dc7f46 (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:05 compute-0 podman[94734]: 2025-12-01 20:33:05.608515363 +0000 UTC m=+0.056004184 container create 1c210bec56f86eb4be9ae80243554b9259c7ffca2d683774d0fb75334f716f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 01 20:33:05 compute-0 systemd[1]: Started libpod-conmon-1c210bec56f86eb4be9ae80243554b9259c7ffca2d683774d0fb75334f716f7d.scope.
Dec 01 20:33:05 compute-0 podman[94734]: 2025-12-01 20:33:05.579779489 +0000 UTC m=+0.027268370 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:05 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:05 compute-0 podman[94734]: 2025-12-01 20:33:05.704819145 +0000 UTC m=+0.152307936 container init 1c210bec56f86eb4be9ae80243554b9259c7ffca2d683774d0fb75334f716f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 01 20:33:05 compute-0 podman[94734]: 2025-12-01 20:33:05.715672087 +0000 UTC m=+0.163160898 container start 1c210bec56f86eb4be9ae80243554b9259c7ffca2d683774d0fb75334f716f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 01 20:33:05 compute-0 systemd[1]: libpod-1c210bec56f86eb4be9ae80243554b9259c7ffca2d683774d0fb75334f716f7d.scope: Deactivated successfully.
Dec 01 20:33:05 compute-0 quirky_carver[94754]: 167 167
Dec 01 20:33:05 compute-0 podman[94734]: 2025-12-01 20:33:05.719498808 +0000 UTC m=+0.166987639 container attach 1c210bec56f86eb4be9ae80243554b9259c7ffca2d683774d0fb75334f716f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:33:05 compute-0 conmon[94754]: conmon 1c210bec56f86eb4be9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c210bec56f86eb4be9ae80243554b9259c7ffca2d683774d0fb75334f716f7d.scope/container/memory.events
Dec 01 20:33:05 compute-0 podman[94734]: 2025-12-01 20:33:05.720877281 +0000 UTC m=+0.168366102 container died 1c210bec56f86eb4be9ae80243554b9259c7ffca2d683774d0fb75334f716f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:33:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fec152daa1d7a6ac114b7053c8b2f62144e74d5c0d0f6a12c066df55fcc663e-merged.mount: Deactivated successfully.
Dec 01 20:33:05 compute-0 podman[94734]: 2025-12-01 20:33:05.762590444 +0000 UTC m=+0.210079235 container remove 1c210bec56f86eb4be9ae80243554b9259c7ffca2d683774d0fb75334f716f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carver, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:33:05 compute-0 systemd[1]: libpod-conmon-1c210bec56f86eb4be9ae80243554b9259c7ffca2d683774d0fb75334f716f7d.scope: Deactivated successfully.
Dec 01 20:33:05 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:33:05 compute-0 zen_bartik[94722]: 
Dec 01 20:33:05 compute-0 zen_bartik[94722]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 01 20:33:05 compute-0 podman[94793]: 2025-12-01 20:33:05.990011785 +0000 UTC m=+0.052249256 container create 21e9664afb89f64594d5eef6de9b38c7a6a0fa9c259fa1e7fe3e96e5385d217e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 01 20:33:06 compute-0 systemd[1]: libpod-0fe69b28811529b272d0ef08e9be9cacb56c6f9453799d8b3e0370d3b6dc7f46.scope: Deactivated successfully.
Dec 01 20:33:06 compute-0 podman[94705]: 2025-12-01 20:33:06.000322959 +0000 UTC m=+0.577037089 container died 0fe69b28811529b272d0ef08e9be9cacb56c6f9453799d8b3e0370d3b6dc7f46 (image=quay.io/ceph/ceph:v20, name=zen_bartik, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 01 20:33:06 compute-0 systemd[1]: Started libpod-conmon-21e9664afb89f64594d5eef6de9b38c7a6a0fa9c259fa1e7fe3e96e5385d217e.scope.
Dec 01 20:33:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae2bc8ca0481b2f3e0472790cf892cc34e470cc986acba4dd72a67883146cf09-merged.mount: Deactivated successfully.
Dec 01 20:33:06 compute-0 podman[94705]: 2025-12-01 20:33:06.054405282 +0000 UTC m=+0.631119392 container remove 0fe69b28811529b272d0ef08e9be9cacb56c6f9453799d8b3e0370d3b6dc7f46 (image=quay.io/ceph/ceph:v20, name=zen_bartik, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 01 20:33:06 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:06 compute-0 systemd[1]: libpod-conmon-0fe69b28811529b272d0ef08e9be9cacb56c6f9453799d8b3e0370d3b6dc7f46.scope: Deactivated successfully.
Dec 01 20:33:06 compute-0 podman[94793]: 2025-12-01 20:33:05.970396117 +0000 UTC m=+0.032633598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c773233e1c3c64caa7eddf71ff939031b49c0b100e64c41e4aceecd896ae9596/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c773233e1c3c64caa7eddf71ff939031b49c0b100e64c41e4aceecd896ae9596/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c773233e1c3c64caa7eddf71ff939031b49c0b100e64c41e4aceecd896ae9596/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c773233e1c3c64caa7eddf71ff939031b49c0b100e64c41e4aceecd896ae9596/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c773233e1c3c64caa7eddf71ff939031b49c0b100e64c41e4aceecd896ae9596/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:06 compute-0 ceph-mon[75880]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:06 compute-0 ceph-mon[75880]: mds.? [v2:192.168.122.100:6814/1220564160,v1:192.168.122.100:6815/1220564160] up:active
Dec 01 20:33:06 compute-0 ceph-mon[75880]: fsmap cephfs:1 {0=cephfs.compute-0.pstuwl=up:active}
Dec 01 20:33:06 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:06 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:06 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:33:06 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:33:06 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:06 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:33:06 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:33:06 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:33:06 compute-0 sudo[94677]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:06 compute-0 podman[94793]: 2025-12-01 20:33:06.092709628 +0000 UTC m=+0.154947109 container init 21e9664afb89f64594d5eef6de9b38c7a6a0fa9c259fa1e7fe3e96e5385d217e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lederberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 01 20:33:06 compute-0 podman[94793]: 2025-12-01 20:33:06.104655644 +0000 UTC m=+0.166893115 container start 21e9664afb89f64594d5eef6de9b38c7a6a0fa9c259fa1e7fe3e96e5385d217e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 01 20:33:06 compute-0 podman[94793]: 2025-12-01 20:33:06.108688492 +0000 UTC m=+0.170926053 container attach 21e9664afb89f64594d5eef6de9b38c7a6a0fa9c259fa1e7fe3e96e5385d217e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:33:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:06 compute-0 jovial_lederberg[94825]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:33:06 compute-0 jovial_lederberg[94825]: --> All data devices are unavailable
Dec 01 20:33:06 compute-0 systemd[1]: libpod-21e9664afb89f64594d5eef6de9b38c7a6a0fa9c259fa1e7fe3e96e5385d217e.scope: Deactivated successfully.
Dec 01 20:33:06 compute-0 podman[94851]: 2025-12-01 20:33:06.768151255 +0000 UTC m=+0.029573552 container died 21e9664afb89f64594d5eef6de9b38c7a6a0fa9c259fa1e7fe3e96e5385d217e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lederberg, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 01 20:33:06 compute-0 sudo[94878]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khnasoywufgihfaqnvgllfwzndymkmjy ; /usr/bin/python3'
Dec 01 20:33:06 compute-0 sudo[94878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c773233e1c3c64caa7eddf71ff939031b49c0b100e64c41e4aceecd896ae9596-merged.mount: Deactivated successfully.
Dec 01 20:33:06 compute-0 podman[94851]: 2025-12-01 20:33:06.818551492 +0000 UTC m=+0.079973789 container remove 21e9664afb89f64594d5eef6de9b38c7a6a0fa9c259fa1e7fe3e96e5385d217e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:33:06 compute-0 systemd[1]: libpod-conmon-21e9664afb89f64594d5eef6de9b38c7a6a0fa9c259fa1e7fe3e96e5385d217e.scope: Deactivated successfully.
Dec 01 20:33:06 compute-0 sudo[94678]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:06 compute-0 sudo[94885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:33:06 compute-0 sudo[94885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:06 compute-0 sudo[94885]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:06 compute-0 python3[94884]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:33:07 compute-0 sudo[94910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:33:07 compute-0 sudo[94910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:07 compute-0 podman[94911]: 2025-12-01 20:33:07.065391574 +0000 UTC m=+0.072264766 container create 70fb68310b135f368da157f59db33415d264099c675e26072eb6cfc412212a49 (image=quay.io/ceph/ceph:v20, name=brave_mclaren, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:33:07 compute-0 ceph-mon[75880]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:33:07 compute-0 systemd[1]: Started libpod-conmon-70fb68310b135f368da157f59db33415d264099c675e26072eb6cfc412212a49.scope.
Dec 01 20:33:07 compute-0 podman[94911]: 2025-12-01 20:33:07.038317622 +0000 UTC m=+0.045190864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:33:07 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54b5637f8fc1ea4663c500859fc6a0801ea5fc13b2aa2d7e00dc309649c1b06/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54b5637f8fc1ea4663c500859fc6a0801ea5fc13b2aa2d7e00dc309649c1b06/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:07 compute-0 podman[94911]: 2025-12-01 20:33:07.163069859 +0000 UTC m=+0.169943141 container init 70fb68310b135f368da157f59db33415d264099c675e26072eb6cfc412212a49 (image=quay.io/ceph/ceph:v20, name=brave_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:07 compute-0 podman[94911]: 2025-12-01 20:33:07.175853961 +0000 UTC m=+0.182727193 container start 70fb68310b135f368da157f59db33415d264099c675e26072eb6cfc412212a49 (image=quay.io/ceph/ceph:v20, name=brave_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:07 compute-0 podman[94911]: 2025-12-01 20:33:07.193248509 +0000 UTC m=+0.200121751 container attach 70fb68310b135f368da157f59db33415d264099c675e26072eb6cfc412212a49 (image=quay.io/ceph/ceph:v20, name=brave_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:07 compute-0 podman[94968]: 2025-12-01 20:33:07.399036579 +0000 UTC m=+0.063233682 container create fecf5823627eb3ee847cc3d329fad73e500b78f34c4a12ef69a33311b34d9413 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ganguly, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:07 compute-0 systemd[1]: Started libpod-conmon-fecf5823627eb3ee847cc3d329fad73e500b78f34c4a12ef69a33311b34d9413.scope.
Dec 01 20:33:07 compute-0 podman[94968]: 2025-12-01 20:33:07.371937185 +0000 UTC m=+0.036134338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:07 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:07 compute-0 podman[94968]: 2025-12-01 20:33:07.504451868 +0000 UTC m=+0.168649021 container init fecf5823627eb3ee847cc3d329fad73e500b78f34c4a12ef69a33311b34d9413 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 01 20:33:07 compute-0 podman[94968]: 2025-12-01 20:33:07.512076198 +0000 UTC m=+0.176273311 container start fecf5823627eb3ee847cc3d329fad73e500b78f34c4a12ef69a33311b34d9413 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ganguly, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:07 compute-0 agitated_ganguly[95002]: 167 167
Dec 01 20:33:07 compute-0 systemd[1]: libpod-fecf5823627eb3ee847cc3d329fad73e500b78f34c4a12ef69a33311b34d9413.scope: Deactivated successfully.
Dec 01 20:33:07 compute-0 podman[94968]: 2025-12-01 20:33:07.517570331 +0000 UTC m=+0.181767444 container attach fecf5823627eb3ee847cc3d329fad73e500b78f34c4a12ef69a33311b34d9413 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 01 20:33:07 compute-0 podman[94968]: 2025-12-01 20:33:07.518127548 +0000 UTC m=+0.182324651 container died fecf5823627eb3ee847cc3d329fad73e500b78f34c4a12ef69a33311b34d9413 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 01 20:33:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6487d39bfdaf39ba5031e24438793da1a48479da382e98e8ad3d2af3a09fdb7e-merged.mount: Deactivated successfully.
Dec 01 20:33:07 compute-0 podman[94968]: 2025-12-01 20:33:07.559328495 +0000 UTC m=+0.223525578 container remove fecf5823627eb3ee847cc3d329fad73e500b78f34c4a12ef69a33311b34d9413 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 01 20:33:07 compute-0 systemd[1]: libpod-conmon-fecf5823627eb3ee847cc3d329fad73e500b78f34c4a12ef69a33311b34d9413.scope: Deactivated successfully.
Dec 01 20:33:07 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:33:07 compute-0 brave_mclaren[94950]: 
Dec 01 20:33:07 compute-0 brave_mclaren[94950]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}]
Dec 01 20:33:07 compute-0 systemd[1]: libpod-70fb68310b135f368da157f59db33415d264099c675e26072eb6cfc412212a49.scope: Deactivated successfully.
Dec 01 20:33:07 compute-0 podman[94911]: 2025-12-01 20:33:07.641968848 +0000 UTC m=+0.648842090 container died 70fb68310b135f368da157f59db33415d264099c675e26072eb6cfc412212a49 (image=quay.io/ceph/ceph:v20, name=brave_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 01 20:33:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d54b5637f8fc1ea4663c500859fc6a0801ea5fc13b2aa2d7e00dc309649c1b06-merged.mount: Deactivated successfully.
Dec 01 20:33:07 compute-0 podman[94911]: 2025-12-01 20:33:07.697994302 +0000 UTC m=+0.704867504 container remove 70fb68310b135f368da157f59db33415d264099c675e26072eb6cfc412212a49 (image=quay.io/ceph/ceph:v20, name=brave_mclaren, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:33:07 compute-0 systemd[1]: libpod-conmon-70fb68310b135f368da157f59db33415d264099c675e26072eb6cfc412212a49.scope: Deactivated successfully.
Dec 01 20:33:07 compute-0 sudo[94878]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:07 compute-0 podman[95040]: 2025-12-01 20:33:07.798243448 +0000 UTC m=+0.052899616 container create 975dbd1cb46d73f1a421786b600cf4d3e47258f5f55712d9488bbb666ae12315 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dirac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:07 compute-0 systemd[1]: Started libpod-conmon-975dbd1cb46d73f1a421786b600cf4d3e47258f5f55712d9488bbb666ae12315.scope.
Dec 01 20:33:07 compute-0 podman[95040]: 2025-12-01 20:33:07.769856514 +0000 UTC m=+0.024512722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:07 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6746a67134882c9e07fe79d26183af36eda8e91c275f06e80a616a9ba9e16a82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6746a67134882c9e07fe79d26183af36eda8e91c275f06e80a616a9ba9e16a82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6746a67134882c9e07fe79d26183af36eda8e91c275f06e80a616a9ba9e16a82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6746a67134882c9e07fe79d26183af36eda8e91c275f06e80a616a9ba9e16a82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:07 compute-0 podman[95040]: 2025-12-01 20:33:07.888312914 +0000 UTC m=+0.142969072 container init 975dbd1cb46d73f1a421786b600cf4d3e47258f5f55712d9488bbb666ae12315 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 20:33:07 compute-0 podman[95040]: 2025-12-01 20:33:07.904224205 +0000 UTC m=+0.158880363 container start 975dbd1cb46d73f1a421786b600cf4d3e47258f5f55712d9488bbb666ae12315 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:07 compute-0 podman[95040]: 2025-12-01 20:33:07.908047255 +0000 UTC m=+0.162703463 container attach 975dbd1cb46d73f1a421786b600cf4d3e47258f5f55712d9488bbb666ae12315 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 01 20:33:08 compute-0 ceph-mon[75880]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:08 compute-0 ansible-async_wrapper.py[94139]: Done in kid B.
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]: {
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:     "0": [
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:         {
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "devices": [
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "/dev/loop3"
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             ],
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_name": "ceph_lv0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_size": "21470642176",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "name": "ceph_lv0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "tags": {
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.cluster_name": "ceph",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.crush_device_class": "",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.encrypted": "0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.objectstore": "bluestore",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.osd_id": "0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.type": "block",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.vdo": "0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.with_tpm": "0"
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             },
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "type": "block",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "vg_name": "ceph_vg0"
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:         }
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:     ],
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:     "1": [
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:         {
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "devices": [
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "/dev/loop4"
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             ],
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_name": "ceph_lv1",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_size": "21470642176",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "name": "ceph_lv1",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "tags": {
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.cluster_name": "ceph",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.crush_device_class": "",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.encrypted": "0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.objectstore": "bluestore",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.osd_id": "1",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.type": "block",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.vdo": "0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.with_tpm": "0"
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             },
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "type": "block",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "vg_name": "ceph_vg1"
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:         }
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:     ],
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:     "2": [
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:         {
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "devices": [
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "/dev/loop5"
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             ],
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_name": "ceph_lv2",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_size": "21470642176",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "name": "ceph_lv2",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "tags": {
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.cluster_name": "ceph",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.crush_device_class": "",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.encrypted": "0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.objectstore": "bluestore",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.osd_id": "2",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.type": "block",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.vdo": "0",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:                 "ceph.with_tpm": "0"
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             },
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "type": "block",
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:             "vg_name": "ceph_vg2"
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:         }
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]:     ]
Dec 01 20:33:08 compute-0 eloquent_dirac[95057]: }
Dec 01 20:33:08 compute-0 ceph-mgr[76174]: [progress INFO root] Writing back 4 completed events
Dec 01 20:33:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 01 20:33:08 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:08 compute-0 systemd[1]: libpod-975dbd1cb46d73f1a421786b600cf4d3e47258f5f55712d9488bbb666ae12315.scope: Deactivated successfully.
Dec 01 20:33:08 compute-0 podman[95040]: 2025-12-01 20:33:08.273632056 +0000 UTC m=+0.528288214 container died 975dbd1cb46d73f1a421786b600cf4d3e47258f5f55712d9488bbb666ae12315 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6746a67134882c9e07fe79d26183af36eda8e91c275f06e80a616a9ba9e16a82-merged.mount: Deactivated successfully.
Dec 01 20:33:08 compute-0 podman[95040]: 2025-12-01 20:33:08.334568164 +0000 UTC m=+0.589224332 container remove 975dbd1cb46d73f1a421786b600cf4d3e47258f5f55712d9488bbb666ae12315 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dirac, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:33:08 compute-0 systemd[1]: libpod-conmon-975dbd1cb46d73f1a421786b600cf4d3e47258f5f55712d9488bbb666ae12315.scope: Deactivated successfully.
Dec 01 20:33:08 compute-0 sudo[94910]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:08 compute-0 sudo[95078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:33:08 compute-0 sudo[95078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:08 compute-0 sudo[95078]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:08 compute-0 sudo[95140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylssgwhljuxyumbsmnpwapnlidshktjh ; /usr/bin/python3'
Dec 01 20:33:08 compute-0 sudo[95140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:08 compute-0 sudo[95110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:33:08 compute-0 sudo[95110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:08 compute-0 python3[95151]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:33:08 compute-0 podman[95154]: 2025-12-01 20:33:08.801113744 +0000 UTC m=+0.054809167 container create cdd511f14ba628d196a36e048f79154658bb4a90258a2025ced0f455417c21b9 (image=quay.io/ceph/ceph:v20, name=nifty_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:33:08 compute-0 systemd[1]: Started libpod-conmon-cdd511f14ba628d196a36e048f79154658bb4a90258a2025ced0f455417c21b9.scope.
Dec 01 20:33:08 compute-0 podman[95154]: 2025-12-01 20:33:08.771223924 +0000 UTC m=+0.024919407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:33:08 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6980015419326938c074849d37f922647f14b2fe7160b297a9cdc817ffdd6d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6980015419326938c074849d37f922647f14b2fe7160b297a9cdc817ffdd6d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:08 compute-0 podman[95181]: 2025-12-01 20:33:08.905347626 +0000 UTC m=+0.048600801 container create 70186a96cfef3a4d7ded5c5e787d9cb9d52827df0814f2eb1e980fa71405bd3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:08 compute-0 podman[95154]: 2025-12-01 20:33:08.916717804 +0000 UTC m=+0.170413207 container init cdd511f14ba628d196a36e048f79154658bb4a90258a2025ced0f455417c21b9 (image=quay.io/ceph/ceph:v20, name=nifty_chaum, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:33:08 compute-0 podman[95154]: 2025-12-01 20:33:08.927318798 +0000 UTC m=+0.181014181 container start cdd511f14ba628d196a36e048f79154658bb4a90258a2025ced0f455417c21b9 (image=quay.io/ceph/ceph:v20, name=nifty_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:33:08 compute-0 podman[95154]: 2025-12-01 20:33:08.930321203 +0000 UTC m=+0.184016586 container attach cdd511f14ba628d196a36e048f79154658bb4a90258a2025ced0f455417c21b9 (image=quay.io/ceph/ceph:v20, name=nifty_chaum, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 01 20:33:08 compute-0 systemd[1]: Started libpod-conmon-70186a96cfef3a4d7ded5c5e787d9cb9d52827df0814f2eb1e980fa71405bd3d.scope.
Dec 01 20:33:08 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:08 compute-0 podman[95181]: 2025-12-01 20:33:08.883741065 +0000 UTC m=+0.026994290 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:08 compute-0 podman[95181]: 2025-12-01 20:33:08.98360187 +0000 UTC m=+0.126855065 container init 70186a96cfef3a4d7ded5c5e787d9cb9d52827df0814f2eb1e980fa71405bd3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:33:08 compute-0 podman[95181]: 2025-12-01 20:33:08.990748725 +0000 UTC m=+0.134001910 container start 70186a96cfef3a4d7ded5c5e787d9cb9d52827df0814f2eb1e980fa71405bd3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 01 20:33:08 compute-0 intelligent_tharp[95201]: 167 167
Dec 01 20:33:08 compute-0 podman[95181]: 2025-12-01 20:33:08.99441185 +0000 UTC m=+0.137665075 container attach 70186a96cfef3a4d7ded5c5e787d9cb9d52827df0814f2eb1e980fa71405bd3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 01 20:33:08 compute-0 systemd[1]: libpod-70186a96cfef3a4d7ded5c5e787d9cb9d52827df0814f2eb1e980fa71405bd3d.scope: Deactivated successfully.
Dec 01 20:33:08 compute-0 podman[95181]: 2025-12-01 20:33:08.995475863 +0000 UTC m=+0.138729038 container died 70186a96cfef3a4d7ded5c5e787d9cb9d52827df0814f2eb1e980fa71405bd3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_tharp, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 01 20:33:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-56193aeae168d6211d5e69a39b2e5023f09be75ca463554dc6869ace4b0cfac0-merged.mount: Deactivated successfully.
Dec 01 20:33:09 compute-0 podman[95181]: 2025-12-01 20:33:09.02936469 +0000 UTC m=+0.172617875 container remove 70186a96cfef3a4d7ded5c5e787d9cb9d52827df0814f2eb1e980fa71405bd3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Dec 01 20:33:09 compute-0 systemd[1]: libpod-conmon-70186a96cfef3a4d7ded5c5e787d9cb9d52827df0814f2eb1e980fa71405bd3d.scope: Deactivated successfully.
Dec 01 20:33:09 compute-0 ceph-mds[94156]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 01 20:33:09 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mds-cephfs-compute-0-pstuwl[94152]: 2025-12-01T20:33:09.065+0000 7f42b2fb0640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 01 20:33:09 compute-0 ceph-mon[75880]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:33:09 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:09 compute-0 podman[95244]: 2025-12-01 20:33:09.201002735 +0000 UTC m=+0.058119991 container create 4e1d16ea149c6f148f0d93b7af42adadb0036e313175adacefa02fe5e6776163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 01 20:33:09 compute-0 systemd[1]: Started libpod-conmon-4e1d16ea149c6f148f0d93b7af42adadb0036e313175adacefa02fe5e6776163.scope.
Dec 01 20:33:09 compute-0 podman[95244]: 2025-12-01 20:33:09.180137257 +0000 UTC m=+0.037254513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:09 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69fd9137a4a5da561fe8c142b30c6198c23030c46474aa7e69f9e1068a331a62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69fd9137a4a5da561fe8c142b30c6198c23030c46474aa7e69f9e1068a331a62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69fd9137a4a5da561fe8c142b30c6198c23030c46474aa7e69f9e1068a331a62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69fd9137a4a5da561fe8c142b30c6198c23030c46474aa7e69f9e1068a331a62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:09 compute-0 podman[95244]: 2025-12-01 20:33:09.301299043 +0000 UTC m=+0.158416299 container init 4e1d16ea149c6f148f0d93b7af42adadb0036e313175adacefa02fe5e6776163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:09 compute-0 podman[95244]: 2025-12-01 20:33:09.312287259 +0000 UTC m=+0.169404525 container start 4e1d16ea149c6f148f0d93b7af42adadb0036e313175adacefa02fe5e6776163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:33:09 compute-0 podman[95244]: 2025-12-01 20:33:09.316742499 +0000 UTC m=+0.173859765 container attach 4e1d16ea149c6f148f0d93b7af42adadb0036e313175adacefa02fe5e6776163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 01 20:33:09 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:33:09 compute-0 nifty_chaum[95183]: 
Dec 01 20:33:09 compute-0 nifty_chaum[95183]: [{"container_id": "83fed8c2b0dc", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.25%", "created": "2025-12-01T20:31:54.011557Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-01T20:31:54.065993Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-01T20:33:05.087239Z", "memory_usage": 7799308, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2025-12-01T20:31:53.921734Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b@crash.compute-0", "version": "20.2.0"}, {"container_id": "680fbbfc21d3", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "7.15%", "created": "2025-12-01T20:33:03.329795Z", "daemon_id": "cephfs.compute-0.pstuwl", "daemon_name": "mds.cephfs.compute-0.pstuwl", "daemon_type": "mds", "events": ["2025-12-01T20:33:03.386051Z daemon:mds.cephfs.compute-0.pstuwl [INFO] \"Deployed mds.cephfs.compute-0.pstuwl on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2025-12-01T20:33:05.087654Z", "memory_usage": 18287165, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2025-12-01T20:33:03.187484Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b@mds.cephfs.compute-0.pstuwl", "version": "20.2.0"}, {"container_id": "c450ae80f1c1", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "18.35%", "created": "2025-12-01T20:31:16.384559Z", "daemon_id": "compute-0.xhvuzu", "daemon_name": "mgr.compute-0.xhvuzu", "daemon_type": "mgr", "events": ["2025-12-01T20:31:58.143844Z daemon:mgr.compute-0.xhvuzu [INFO] \"Reconfigured mgr.compute-0.xhvuzu on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-01T20:33:05.087063Z", "memory_usage": 545783808, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-01T20:31:16.124889Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b@mgr.compute-0.xhvuzu", "version": "20.2.0"}, {"container_id": "4df6a7b208c7", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.70%", "created": "2025-12-01T20:31:12.051422Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-01T20:31:57.508107Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-01T20:33:05.086924Z", "memory_request": 2147483648, "memory_usage": 41670410, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2025-12-01T20:31:14.164988Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b@mon.compute-0", "version": "20.2.0"}, {"container_id": "6923995d2d72", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.85%", "created": "2025-12-01T20:32:16.245300Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-01T20:32:16.326832Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-01T20:33:05.087348Z", "memory_request": 4294967296, "memory_usage": 56937676, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-01T20:32:16.130728Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b@osd.0", "version": "20.2.0"}, {"container_id": "b356bc457bab", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "2.03%", "created": "2025-12-01T20:32:20.692440Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-01T20:32:20.799689Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-01T20:33:05.087450Z", "memory_request": 4294967296, "memory_usage": 60272148, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-01T20:32:20.511830Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b@osd.1", "version": "20.2.0"}, {"container_id": "ea37d4c8b3d8", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "2.35%", "created": "2025-12-01T20:32:25.809601Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-01T20:32:25.921043Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-01T20:33:05.087551Z", "memory_request": 4294967296, "memory_usage": 57409536, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-01T20:32:25.636598Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b@osd.2", "version": "20.2.0"}]
Dec 01 20:33:09 compute-0 systemd[1]: libpod-cdd511f14ba628d196a36e048f79154658bb4a90258a2025ced0f455417c21b9.scope: Deactivated successfully.
Dec 01 20:33:09 compute-0 podman[95154]: 2025-12-01 20:33:09.357782991 +0000 UTC m=+0.611478374 container died cdd511f14ba628d196a36e048f79154658bb4a90258a2025ced0f455417c21b9 (image=quay.io/ceph/ceph:v20, name=nifty_chaum, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:33:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d6980015419326938c074849d37f922647f14b2fe7160b297a9cdc817ffdd6d-merged.mount: Deactivated successfully.
Dec 01 20:33:09 compute-0 podman[95154]: 2025-12-01 20:33:09.392538456 +0000 UTC m=+0.646233839 container remove cdd511f14ba628d196a36e048f79154658bb4a90258a2025ced0f455417c21b9 (image=quay.io/ceph/ceph:v20, name=nifty_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:09 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:33:09 compute-0 systemd[1]: libpod-conmon-cdd511f14ba628d196a36e048f79154658bb4a90258a2025ced0f455417c21b9.scope: Deactivated successfully.
Dec 01 20:33:09 compute-0 sudo[95140]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:09 compute-0 lvm[95353]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:33:09 compute-0 lvm[95354]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:33:09 compute-0 lvm[95353]: VG ceph_vg0 finished
Dec 01 20:33:09 compute-0 lvm[95354]: VG ceph_vg1 finished
Dec 01 20:33:09 compute-0 lvm[95356]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:33:09 compute-0 lvm[95356]: VG ceph_vg2 finished
Dec 01 20:33:10 compute-0 lvm[95357]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:33:10 compute-0 lvm[95357]: VG ceph_vg0 finished
Dec 01 20:33:10 compute-0 quirky_wing[95261]: {}
Dec 01 20:33:10 compute-0 ceph-mon[75880]: pgmap v72: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:10 compute-0 systemd[1]: libpod-4e1d16ea149c6f148f0d93b7af42adadb0036e313175adacefa02fe5e6776163.scope: Deactivated successfully.
Dec 01 20:33:10 compute-0 systemd[1]: libpod-4e1d16ea149c6f148f0d93b7af42adadb0036e313175adacefa02fe5e6776163.scope: Consumed 1.339s CPU time.
Dec 01 20:33:10 compute-0 podman[95244]: 2025-12-01 20:33:10.145617217 +0000 UTC m=+1.002734453 container died 4e1d16ea149c6f148f0d93b7af42adadb0036e313175adacefa02fe5e6776163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 01 20:33:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-69fd9137a4a5da561fe8c142b30c6198c23030c46474aa7e69f9e1068a331a62-merged.mount: Deactivated successfully.
Dec 01 20:33:10 compute-0 podman[95244]: 2025-12-01 20:33:10.193762352 +0000 UTC m=+1.050879628 container remove 4e1d16ea149c6f148f0d93b7af42adadb0036e313175adacefa02fe5e6776163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 01 20:33:10 compute-0 systemd[1]: libpod-conmon-4e1d16ea149c6f148f0d93b7af42adadb0036e313175adacefa02fe5e6776163.scope: Deactivated successfully.
Dec 01 20:33:10 compute-0 sudo[95395]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atxzlxotemnbkarvgazsamsgzynyujis ; /usr/bin/python3'
Dec 01 20:33:10 compute-0 sudo[95395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:10 compute-0 sudo[95110]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:33:10 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:33:10 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:10 compute-0 sudo[95398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:33:10 compute-0 sudo[95398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:10 compute-0 sudo[95398]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:10 compute-0 python3[95397]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:33:10 compute-0 sudo[95423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:33:10 compute-0 sudo[95423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:10 compute-0 sudo[95423]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:10 compute-0 sudo[95454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 01 20:33:10 compute-0 sudo[95454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:10 compute-0 podman[95446]: 2025-12-01 20:33:10.434864704 +0000 UTC m=+0.053933579 container create 4d35a5df91de9c50aff4faea6ca1ef53fc68f6020aeb2f7f4c3b346c682cce28 (image=quay.io/ceph/ceph:v20, name=zen_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:33:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:10 compute-0 systemd[1]: Started libpod-conmon-4d35a5df91de9c50aff4faea6ca1ef53fc68f6020aeb2f7f4c3b346c682cce28.scope.
Dec 01 20:33:10 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3c1dea421f391d38cf8e0e56ed43c724932efb69dc0183c22c3db8f7dff74f4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3c1dea421f391d38cf8e0e56ed43c724932efb69dc0183c22c3db8f7dff74f4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:10 compute-0 podman[95446]: 2025-12-01 20:33:10.412028394 +0000 UTC m=+0.031097319 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:33:10 compute-0 podman[95446]: 2025-12-01 20:33:10.506590813 +0000 UTC m=+0.125659698 container init 4d35a5df91de9c50aff4faea6ca1ef53fc68f6020aeb2f7f4c3b346c682cce28 (image=quay.io/ceph/ceph:v20, name=zen_dijkstra, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:33:10 compute-0 podman[95446]: 2025-12-01 20:33:10.513905733 +0000 UTC m=+0.132974598 container start 4d35a5df91de9c50aff4faea6ca1ef53fc68f6020aeb2f7f4c3b346c682cce28 (image=quay.io/ceph/ceph:v20, name=zen_dijkstra, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:10 compute-0 podman[95446]: 2025-12-01 20:33:10.517674662 +0000 UTC m=+0.136743577 container attach 4d35a5df91de9c50aff4faea6ca1ef53fc68f6020aeb2f7f4c3b346c682cce28 (image=quay.io/ceph/ceph:v20, name=zen_dijkstra, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:33:10 compute-0 podman[95557]: 2025-12-01 20:33:10.891614075 +0000 UTC m=+0.079105042 container exec 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 01 20:33:10 compute-0 podman[95557]: 2025-12-01 20:33:10.98258428 +0000 UTC m=+0.170075247 container exec_died 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 01 20:33:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 01 20:33:10 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3652190137' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 01 20:33:11 compute-0 zen_dijkstra[95489]: 
Dec 01 20:33:11 compute-0 zen_dijkstra[95489]: {"fsid":"dcf60a89-bba0-58b0-a1bf-d4bde723201b","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":116,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":33,"num_osds":3,"num_up_osds":3,"osd_up_since":1764621152,"num_in_osds":3,"osd_in_since":1764621128,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":24,"data_bytes":461710,"bytes_used":83939328,"bytes_avail":64327987200,"bytes_total":64411926528,"write_bytes_sec":1194,"read_op_per_sec":0,"write_op_per_sec":3},"fsmap":{"epoch":5,"btime":"2025-12-01T20:33:05:056385+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.pstuwl","status":"up:active","gid":14242}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-01T20:32:34.431656+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec 01 20:33:11 compute-0 systemd[1]: libpod-4d35a5df91de9c50aff4faea6ca1ef53fc68f6020aeb2f7f4c3b346c682cce28.scope: Deactivated successfully.
Dec 01 20:33:11 compute-0 podman[95446]: 2025-12-01 20:33:11.020301127 +0000 UTC m=+0.639370002 container died 4d35a5df91de9c50aff4faea6ca1ef53fc68f6020aeb2f7f4c3b346c682cce28 (image=quay.io/ceph/ceph:v20, name=zen_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 01 20:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3c1dea421f391d38cf8e0e56ed43c724932efb69dc0183c22c3db8f7dff74f4-merged.mount: Deactivated successfully.
Dec 01 20:33:11 compute-0 podman[95446]: 2025-12-01 20:33:11.069731764 +0000 UTC m=+0.688800629 container remove 4d35a5df91de9c50aff4faea6ca1ef53fc68f6020aeb2f7f4c3b346c682cce28 (image=quay.io/ceph/ceph:v20, name=zen_dijkstra, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 01 20:33:11 compute-0 systemd[1]: libpod-conmon-4d35a5df91de9c50aff4faea6ca1ef53fc68f6020aeb2f7f4c3b346c682cce28.scope: Deactivated successfully.
Dec 01 20:33:11 compute-0 sudo[95395]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:11 compute-0 ceph-mon[75880]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 01 20:33:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:11 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3652190137' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 01 20:33:11 compute-0 sudo[95454]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:33:11 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:33:11 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:33:11 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:33:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:33:11 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:33:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:33:11 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:33:11 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:33:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:33:11 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:33:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:33:11 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:33:11 compute-0 sudo[95738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:33:11 compute-0 sudo[95738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:11 compute-0 sudo[95738]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:11 compute-0 sudo[95793]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueamhpxwksyzuwijemssezpfznepgrux ; /usr/bin/python3'
Dec 01 20:33:11 compute-0 sudo[95793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:11 compute-0 sudo[95776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:33:11 compute-0 sudo[95776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:12 compute-0 python3[95808]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:33:12 compute-0 ceph-mon[75880]: pgmap v73: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:12 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:12 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:12 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:33:12 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:33:12 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:12 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:33:12 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:33:12 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:33:12 compute-0 podman[95814]: 2025-12-01 20:33:12.122352116 +0000 UTC m=+0.064083640 container create 2c78f4fceabe2621ec0bdf7e5401672cf9d315371b42df13da991fe04d1630ff (image=quay.io/ceph/ceph:v20, name=laughing_rubin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 01 20:33:12 compute-0 systemd[1]: Started libpod-conmon-2c78f4fceabe2621ec0bdf7e5401672cf9d315371b42df13da991fe04d1630ff.scope.
Dec 01 20:33:12 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:12 compute-0 podman[95814]: 2025-12-01 20:33:12.097379429 +0000 UTC m=+0.039111033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a38ac538aad68a5be41a1002f3ac09c4cbdb6ee56b3eaf0294a57c42a445cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03a38ac538aad68a5be41a1002f3ac09c4cbdb6ee56b3eaf0294a57c42a445cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:12 compute-0 podman[95814]: 2025-12-01 20:33:12.208623072 +0000 UTC m=+0.150354596 container init 2c78f4fceabe2621ec0bdf7e5401672cf9d315371b42df13da991fe04d1630ff (image=quay.io/ceph/ceph:v20, name=laughing_rubin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 01 20:33:12 compute-0 podman[95814]: 2025-12-01 20:33:12.220384722 +0000 UTC m=+0.162116236 container start 2c78f4fceabe2621ec0bdf7e5401672cf9d315371b42df13da991fe04d1630ff (image=quay.io/ceph/ceph:v20, name=laughing_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 01 20:33:12 compute-0 podman[95814]: 2025-12-01 20:33:12.223521651 +0000 UTC m=+0.165253165 container attach 2c78f4fceabe2621ec0bdf7e5401672cf9d315371b42df13da991fe04d1630ff (image=quay.io/ceph/ceph:v20, name=laughing_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 01 20:33:12 compute-0 podman[95843]: 2025-12-01 20:33:12.252462592 +0000 UTC m=+0.046862996 container create 8a8a192b325f6fe8f0764daec96f6d4d6bdd50457a144cdf1accb6428ce81627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 01 20:33:12 compute-0 systemd[1]: Started libpod-conmon-8a8a192b325f6fe8f0764daec96f6d4d6bdd50457a144cdf1accb6428ce81627.scope.
Dec 01 20:33:12 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:12 compute-0 podman[95843]: 2025-12-01 20:33:12.232793923 +0000 UTC m=+0.027194317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:12 compute-0 podman[95843]: 2025-12-01 20:33:12.330864271 +0000 UTC m=+0.125264635 container init 8a8a192b325f6fe8f0764daec96f6d4d6bdd50457a144cdf1accb6428ce81627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_chaplygin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 01 20:33:12 compute-0 podman[95843]: 2025-12-01 20:33:12.337835491 +0000 UTC m=+0.132235835 container start 8a8a192b325f6fe8f0764daec96f6d4d6bdd50457a144cdf1accb6428ce81627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 20:33:12 compute-0 wizardly_chaplygin[95861]: 167 167
Dec 01 20:33:12 compute-0 podman[95843]: 2025-12-01 20:33:12.340907116 +0000 UTC m=+0.135307481 container attach 8a8a192b325f6fe8f0764daec96f6d4d6bdd50457a144cdf1accb6428ce81627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_chaplygin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:33:12 compute-0 systemd[1]: libpod-8a8a192b325f6fe8f0764daec96f6d4d6bdd50457a144cdf1accb6428ce81627.scope: Deactivated successfully.
Dec 01 20:33:12 compute-0 podman[95885]: 2025-12-01 20:33:12.382963781 +0000 UTC m=+0.027237489 container died 8a8a192b325f6fe8f0764daec96f6d4d6bdd50457a144cdf1accb6428ce81627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_chaplygin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6e6eb64f04f3ba0cfaf8ebc77ba4e6171caf440b1c12d95525d2021b304e4d9-merged.mount: Deactivated successfully.
Dec 01 20:33:12 compute-0 podman[95885]: 2025-12-01 20:33:12.417752156 +0000 UTC m=+0.062025834 container remove 8a8a192b325f6fe8f0764daec96f6d4d6bdd50457a144cdf1accb6428ce81627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:12 compute-0 systemd[1]: libpod-conmon-8a8a192b325f6fe8f0764daec96f6d4d6bdd50457a144cdf1accb6428ce81627.scope: Deactivated successfully.
Dec 01 20:33:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:12 compute-0 podman[95907]: 2025-12-01 20:33:12.6096999 +0000 UTC m=+0.071629486 container create ca032cec09db12011257d8bdcc584209874e406b6a1f21c4e4938bc70aa139b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 01 20:33:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 01 20:33:12 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/678938888' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:33:12 compute-0 laughing_rubin[95837]: 
Dec 01 20:33:12 compute-0 systemd[1]: libpod-2c78f4fceabe2621ec0bdf7e5401672cf9d315371b42df13da991fe04d1630ff.scope: Deactivated successfully.
Dec 01 20:33:12 compute-0 laughing_rubin[95837]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""}]
Dec 01 20:33:12 compute-0 podman[95814]: 2025-12-01 20:33:12.649565355 +0000 UTC m=+0.591296909 container died 2c78f4fceabe2621ec0bdf7e5401672cf9d315371b42df13da991fe04d1630ff (image=quay.io/ceph/ceph:v20, name=laughing_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:33:12 compute-0 systemd[1]: Started libpod-conmon-ca032cec09db12011257d8bdcc584209874e406b6a1f21c4e4938bc70aa139b1.scope.
Dec 01 20:33:12 compute-0 podman[95907]: 2025-12-01 20:33:12.577493556 +0000 UTC m=+0.039423202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:12 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-03a38ac538aad68a5be41a1002f3ac09c4cbdb6ee56b3eaf0294a57c42a445cb-merged.mount: Deactivated successfully.
Dec 01 20:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e700c4c9e2097b47f6535aa53265ee847df5eebd76c187a209ba40efce6fb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e700c4c9e2097b47f6535aa53265ee847df5eebd76c187a209ba40efce6fb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e700c4c9e2097b47f6535aa53265ee847df5eebd76c187a209ba40efce6fb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e700c4c9e2097b47f6535aa53265ee847df5eebd76c187a209ba40efce6fb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e700c4c9e2097b47f6535aa53265ee847df5eebd76c187a209ba40efce6fb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:12 compute-0 podman[95814]: 2025-12-01 20:33:12.696935136 +0000 UTC m=+0.638666650 container remove 2c78f4fceabe2621ec0bdf7e5401672cf9d315371b42df13da991fe04d1630ff (image=quay.io/ceph/ceph:v20, name=laughing_rubin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Dec 01 20:33:12 compute-0 podman[95907]: 2025-12-01 20:33:12.704082562 +0000 UTC m=+0.166012168 container init ca032cec09db12011257d8bdcc584209874e406b6a1f21c4e4938bc70aa139b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:12 compute-0 systemd[1]: libpod-conmon-2c78f4fceabe2621ec0bdf7e5401672cf9d315371b42df13da991fe04d1630ff.scope: Deactivated successfully.
Dec 01 20:33:12 compute-0 podman[95907]: 2025-12-01 20:33:12.716343417 +0000 UTC m=+0.178273003 container start ca032cec09db12011257d8bdcc584209874e406b6a1f21c4e4938bc70aa139b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 01 20:33:12 compute-0 podman[95907]: 2025-12-01 20:33:12.720363385 +0000 UTC m=+0.182292961 container attach ca032cec09db12011257d8bdcc584209874e406b6a1f21c4e4938bc70aa139b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:12 compute-0 sudo[95793]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:13 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/678938888' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 01 20:33:13 compute-0 compassionate_aryabhata[95928]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:33:13 compute-0 compassionate_aryabhata[95928]: --> All data devices are unavailable
Dec 01 20:33:13 compute-0 systemd[1]: libpod-ca032cec09db12011257d8bdcc584209874e406b6a1f21c4e4938bc70aa139b1.scope: Deactivated successfully.
Dec 01 20:33:13 compute-0 podman[95907]: 2025-12-01 20:33:13.291102654 +0000 UTC m=+0.753032280 container died ca032cec09db12011257d8bdcc584209874e406b6a1f21c4e4938bc70aa139b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 01 20:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-95e700c4c9e2097b47f6535aa53265ee847df5eebd76c187a209ba40efce6fb4-merged.mount: Deactivated successfully.
Dec 01 20:33:13 compute-0 podman[95907]: 2025-12-01 20:33:13.353057865 +0000 UTC m=+0.814987431 container remove ca032cec09db12011257d8bdcc584209874e406b6a1f21c4e4938bc70aa139b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_aryabhata, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:33:13 compute-0 systemd[1]: libpod-conmon-ca032cec09db12011257d8bdcc584209874e406b6a1f21c4e4938bc70aa139b1.scope: Deactivated successfully.
Dec 01 20:33:13 compute-0 sudo[95776]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:13 compute-0 sudo[95968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:33:13 compute-0 sudo[95968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:13 compute-0 sudo[95968]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:13 compute-0 sudo[95993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:33:13 compute-0 sudo[95993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:13 compute-0 sudo[96039]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czwvqfjtkjzmwhwuaotjurpqrafanrij ; /usr/bin/python3'
Dec 01 20:33:13 compute-0 sudo[96039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:13 compute-0 python3[96043]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:33:13 compute-0 podman[96044]: 2025-12-01 20:33:13.732889824 +0000 UTC m=+0.038630907 container create b5214df21c3376c84f0dac899f0c458d23b7288b74140c0e425408d68067c8e0 (image=quay.io/ceph/ceph:v20, name=beautiful_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 20:33:13 compute-0 systemd[1]: Started libpod-conmon-b5214df21c3376c84f0dac899f0c458d23b7288b74140c0e425408d68067c8e0.scope.
Dec 01 20:33:13 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3da09ffcfe336e0b70e60880cd1354547ad7417a7b8ab562f75042125324b33/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3da09ffcfe336e0b70e60880cd1354547ad7417a7b8ab562f75042125324b33/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:13 compute-0 podman[96044]: 2025-12-01 20:33:13.715789306 +0000 UTC m=+0.021530419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:33:13 compute-0 podman[96044]: 2025-12-01 20:33:13.830455106 +0000 UTC m=+0.136196209 container init b5214df21c3376c84f0dac899f0c458d23b7288b74140c0e425408d68067c8e0 (image=quay.io/ceph/ceph:v20, name=beautiful_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:13 compute-0 podman[96044]: 2025-12-01 20:33:13.837029093 +0000 UTC m=+0.142770186 container start b5214df21c3376c84f0dac899f0c458d23b7288b74140c0e425408d68067c8e0 (image=quay.io/ceph/ceph:v20, name=beautiful_fermat, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:33:13 compute-0 podman[96044]: 2025-12-01 20:33:13.841289288 +0000 UTC m=+0.147030401 container attach b5214df21c3376c84f0dac899f0c458d23b7288b74140c0e425408d68067c8e0 (image=quay.io/ceph/ceph:v20, name=beautiful_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:33:13 compute-0 podman[96072]: 2025-12-01 20:33:13.845218851 +0000 UTC m=+0.059579397 container create 441b0939ac740dbd5e420c5cb0006e548cc710ece193f7ad29fbb1a86c43a01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:33:13 compute-0 systemd[1]: Started libpod-conmon-441b0939ac740dbd5e420c5cb0006e548cc710ece193f7ad29fbb1a86c43a01d.scope.
Dec 01 20:33:13 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:13 compute-0 podman[96072]: 2025-12-01 20:33:13.823941541 +0000 UTC m=+0.038302137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:13 compute-0 podman[96072]: 2025-12-01 20:33:13.922469264 +0000 UTC m=+0.136829860 container init 441b0939ac740dbd5e420c5cb0006e548cc710ece193f7ad29fbb1a86c43a01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 01 20:33:13 compute-0 podman[96072]: 2025-12-01 20:33:13.92840844 +0000 UTC m=+0.142769026 container start 441b0939ac740dbd5e420c5cb0006e548cc710ece193f7ad29fbb1a86c43a01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:33:13 compute-0 podman[96072]: 2025-12-01 20:33:13.931775126 +0000 UTC m=+0.146135722 container attach 441b0939ac740dbd5e420c5cb0006e548cc710ece193f7ad29fbb1a86c43a01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Dec 01 20:33:13 compute-0 dreamy_keller[96092]: 167 167
Dec 01 20:33:13 compute-0 systemd[1]: libpod-441b0939ac740dbd5e420c5cb0006e548cc710ece193f7ad29fbb1a86c43a01d.scope: Deactivated successfully.
Dec 01 20:33:13 compute-0 podman[96072]: 2025-12-01 20:33:13.934103109 +0000 UTC m=+0.148463695 container died 441b0939ac740dbd5e420c5cb0006e548cc710ece193f7ad29fbb1a86c43a01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 01 20:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-220d742340992e149eaa74cbfd0abab2c0daefe8414689bcd2d5d27c69e38e72-merged.mount: Deactivated successfully.
Dec 01 20:33:13 compute-0 podman[96072]: 2025-12-01 20:33:13.978570189 +0000 UTC m=+0.192930755 container remove 441b0939ac740dbd5e420c5cb0006e548cc710ece193f7ad29fbb1a86c43a01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 01 20:33:13 compute-0 systemd[1]: libpod-conmon-441b0939ac740dbd5e420c5cb0006e548cc710ece193f7ad29fbb1a86c43a01d.scope: Deactivated successfully.
Dec 01 20:33:14 compute-0 ceph-mon[75880]: pgmap v74: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:14 compute-0 podman[96135]: 2025-12-01 20:33:14.137147363 +0000 UTC m=+0.036861392 container create 7b19c7eeaf2b2310eb5c6ca2a801e42c6a3b3503b29ae874587582a7cb98548c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:33:14 compute-0 systemd[1]: Started libpod-conmon-7b19c7eeaf2b2310eb5c6ca2a801e42c6a3b3503b29ae874587582a7cb98548c.scope.
Dec 01 20:33:14 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d420083a16e3aeb0e3f847baea2f516ab1d232fbcc29968afdc1c35a9e97b53a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d420083a16e3aeb0e3f847baea2f516ab1d232fbcc29968afdc1c35a9e97b53a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d420083a16e3aeb0e3f847baea2f516ab1d232fbcc29968afdc1c35a9e97b53a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d420083a16e3aeb0e3f847baea2f516ab1d232fbcc29968afdc1c35a9e97b53a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:14 compute-0 podman[96135]: 2025-12-01 20:33:14.120050155 +0000 UTC m=+0.019764204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:14 compute-0 podman[96135]: 2025-12-01 20:33:14.218338519 +0000 UTC m=+0.118052578 container init 7b19c7eeaf2b2310eb5c6ca2a801e42c6a3b3503b29ae874587582a7cb98548c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_haibt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 01 20:33:14 compute-0 podman[96135]: 2025-12-01 20:33:14.235118908 +0000 UTC m=+0.134832947 container start 7b19c7eeaf2b2310eb5c6ca2a801e42c6a3b3503b29ae874587582a7cb98548c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_haibt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 01 20:33:14 compute-0 podman[96135]: 2025-12-01 20:33:14.239432753 +0000 UTC m=+0.139146802 container attach 7b19c7eeaf2b2310eb5c6ca2a801e42c6a3b3503b29ae874587582a7cb98548c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 01 20:33:14 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec 01 20:33:14 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1897833177' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 01 20:33:14 compute-0 beautiful_fermat[96074]: mimic
Dec 01 20:33:14 compute-0 systemd[1]: libpod-b5214df21c3376c84f0dac899f0c458d23b7288b74140c0e425408d68067c8e0.scope: Deactivated successfully.
Dec 01 20:33:14 compute-0 podman[96044]: 2025-12-01 20:33:14.305551875 +0000 UTC m=+0.611293008 container died b5214df21c3376c84f0dac899f0c458d23b7288b74140c0e425408d68067c8e0 (image=quay.io/ceph/ceph:v20, name=beautiful_fermat, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 01 20:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3da09ffcfe336e0b70e60880cd1354547ad7417a7b8ab562f75042125324b33-merged.mount: Deactivated successfully.
Dec 01 20:33:14 compute-0 podman[96044]: 2025-12-01 20:33:14.359754011 +0000 UTC m=+0.665495114 container remove b5214df21c3376c84f0dac899f0c458d23b7288b74140c0e425408d68067c8e0 (image=quay.io/ceph/ceph:v20, name=beautiful_fermat, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:33:14 compute-0 systemd[1]: libpod-conmon-b5214df21c3376c84f0dac899f0c458d23b7288b74140c0e425408d68067c8e0.scope: Deactivated successfully.
Dec 01 20:33:14 compute-0 sudo[96039]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:14 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:33:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:14 compute-0 serene_haibt[96152]: {
Dec 01 20:33:14 compute-0 serene_haibt[96152]:     "0": [
Dec 01 20:33:14 compute-0 serene_haibt[96152]:         {
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "devices": [
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "/dev/loop3"
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             ],
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_name": "ceph_lv0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_size": "21470642176",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "name": "ceph_lv0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "tags": {
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.cluster_name": "ceph",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.crush_device_class": "",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.encrypted": "0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.objectstore": "bluestore",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.osd_id": "0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.type": "block",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.vdo": "0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.with_tpm": "0"
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             },
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "type": "block",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "vg_name": "ceph_vg0"
Dec 01 20:33:14 compute-0 serene_haibt[96152]:         }
Dec 01 20:33:14 compute-0 serene_haibt[96152]:     ],
Dec 01 20:33:14 compute-0 serene_haibt[96152]:     "1": [
Dec 01 20:33:14 compute-0 serene_haibt[96152]:         {
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "devices": [
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "/dev/loop4"
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             ],
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_name": "ceph_lv1",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_size": "21470642176",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "name": "ceph_lv1",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "tags": {
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.cluster_name": "ceph",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.crush_device_class": "",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.encrypted": "0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.objectstore": "bluestore",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.osd_id": "1",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.type": "block",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.vdo": "0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.with_tpm": "0"
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             },
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "type": "block",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "vg_name": "ceph_vg1"
Dec 01 20:33:14 compute-0 serene_haibt[96152]:         }
Dec 01 20:33:14 compute-0 serene_haibt[96152]:     ],
Dec 01 20:33:14 compute-0 serene_haibt[96152]:     "2": [
Dec 01 20:33:14 compute-0 serene_haibt[96152]:         {
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "devices": [
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "/dev/loop5"
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             ],
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_name": "ceph_lv2",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_size": "21470642176",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "name": "ceph_lv2",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "tags": {
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.cluster_name": "ceph",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.crush_device_class": "",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.encrypted": "0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.objectstore": "bluestore",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.osd_id": "2",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.type": "block",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.vdo": "0",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:                 "ceph.with_tpm": "0"
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             },
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "type": "block",
Dec 01 20:33:14 compute-0 serene_haibt[96152]:             "vg_name": "ceph_vg2"
Dec 01 20:33:14 compute-0 serene_haibt[96152]:         }
Dec 01 20:33:14 compute-0 serene_haibt[96152]:     ]
Dec 01 20:33:14 compute-0 serene_haibt[96152]: }
Dec 01 20:33:14 compute-0 systemd[1]: libpod-7b19c7eeaf2b2310eb5c6ca2a801e42c6a3b3503b29ae874587582a7cb98548c.scope: Deactivated successfully.
Dec 01 20:33:14 compute-0 podman[96177]: 2025-12-01 20:33:14.597524888 +0000 UTC m=+0.027654092 container died 7b19c7eeaf2b2310eb5c6ca2a801e42c6a3b3503b29ae874587582a7cb98548c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_haibt, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d420083a16e3aeb0e3f847baea2f516ab1d232fbcc29968afdc1c35a9e97b53a-merged.mount: Deactivated successfully.
Dec 01 20:33:14 compute-0 podman[96177]: 2025-12-01 20:33:14.641768111 +0000 UTC m=+0.071897285 container remove 7b19c7eeaf2b2310eb5c6ca2a801e42c6a3b3503b29ae874587582a7cb98548c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Dec 01 20:33:14 compute-0 systemd[1]: libpod-conmon-7b19c7eeaf2b2310eb5c6ca2a801e42c6a3b3503b29ae874587582a7cb98548c.scope: Deactivated successfully.
Dec 01 20:33:14 compute-0 sudo[95993]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:14 compute-0 sudo[96192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:33:14 compute-0 sudo[96192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:14 compute-0 sudo[96192]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:14 compute-0 sudo[96217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:33:14 compute-0 sudo[96217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:15 compute-0 podman[96254]: 2025-12-01 20:33:15.124594093 +0000 UTC m=+0.048560070 container create 4ee1fbf1990704cf84c2715351395eada25bc5524f0b725d53eefa79d862c35b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 01 20:33:15 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1897833177' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 01 20:33:15 compute-0 sudo[96291]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxizcbxhjoxwkneolydtllpoejctcuvn ; /usr/bin/python3'
Dec 01 20:33:15 compute-0 systemd[1]: Started libpod-conmon-4ee1fbf1990704cf84c2715351395eada25bc5524f0b725d53eefa79d862c35b.scope.
Dec 01 20:33:15 compute-0 sudo[96291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:15 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:15 compute-0 podman[96254]: 2025-12-01 20:33:15.104368616 +0000 UTC m=+0.028334603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:15 compute-0 podman[96254]: 2025-12-01 20:33:15.203017982 +0000 UTC m=+0.126983969 container init 4ee1fbf1990704cf84c2715351395eada25bc5524f0b725d53eefa79d862c35b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lovelace, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 01 20:33:15 compute-0 podman[96254]: 2025-12-01 20:33:15.212624065 +0000 UTC m=+0.136590012 container start 4ee1fbf1990704cf84c2715351395eada25bc5524f0b725d53eefa79d862c35b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 20:33:15 compute-0 nifty_lovelace[96295]: 167 167
Dec 01 20:33:15 compute-0 systemd[1]: libpod-4ee1fbf1990704cf84c2715351395eada25bc5524f0b725d53eefa79d862c35b.scope: Deactivated successfully.
Dec 01 20:33:15 compute-0 podman[96254]: 2025-12-01 20:33:15.218800229 +0000 UTC m=+0.142766196 container attach 4ee1fbf1990704cf84c2715351395eada25bc5524f0b725d53eefa79d862c35b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lovelace, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 01 20:33:15 compute-0 podman[96254]: 2025-12-01 20:33:15.219551463 +0000 UTC m=+0.143517430 container died 4ee1fbf1990704cf84c2715351395eada25bc5524f0b725d53eefa79d862c35b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:33:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a4dceeced1f84f602e844c94857c37ee4c877944ed6bfccb06235d19c414efd-merged.mount: Deactivated successfully.
Dec 01 20:33:15 compute-0 podman[96254]: 2025-12-01 20:33:15.254956448 +0000 UTC m=+0.178922395 container remove 4ee1fbf1990704cf84c2715351395eada25bc5524f0b725d53eefa79d862c35b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:33:15 compute-0 systemd[1]: libpod-conmon-4ee1fbf1990704cf84c2715351395eada25bc5524f0b725d53eefa79d862c35b.scope: Deactivated successfully.
Dec 01 20:33:15 compute-0 python3[96297]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:33:15 compute-0 podman[96315]: 2025-12-01 20:33:15.378648492 +0000 UTC m=+0.037422399 container create 9cad772b6675cf2d50cf86a7f0018954a439d255946baf82e7a0e83c551adea7 (image=quay.io/ceph/ceph:v20, name=exciting_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 01 20:33:15 compute-0 systemd[1]: Started libpod-conmon-9cad772b6675cf2d50cf86a7f0018954a439d255946baf82e7a0e83c551adea7.scope.
Dec 01 20:33:15 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:15 compute-0 podman[96332]: 2025-12-01 20:33:15.452327182 +0000 UTC m=+0.054818497 container create efc76ff88e6972c1011486208578c8eb68816ce9ad695b8fcd30bf375fafe64f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e021eee5485380b6d1ad772dfc78b6cac5bfe9f551075e67f95cdcad591a8a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e021eee5485380b6d1ad772dfc78b6cac5bfe9f551075e67f95cdcad591a8a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:15 compute-0 podman[96315]: 2025-12-01 20:33:15.360102278 +0000 UTC m=+0.018876205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 01 20:33:15 compute-0 podman[96315]: 2025-12-01 20:33:15.482820092 +0000 UTC m=+0.141594029 container init 9cad772b6675cf2d50cf86a7f0018954a439d255946baf82e7a0e83c551adea7 (image=quay.io/ceph/ceph:v20, name=exciting_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:33:15 compute-0 podman[96315]: 2025-12-01 20:33:15.492972992 +0000 UTC m=+0.151746929 container start 9cad772b6675cf2d50cf86a7f0018954a439d255946baf82e7a0e83c551adea7 (image=quay.io/ceph/ceph:v20, name=exciting_galois, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:33:15 compute-0 podman[96315]: 2025-12-01 20:33:15.50022475 +0000 UTC m=+0.158998687 container attach 9cad772b6675cf2d50cf86a7f0018954a439d255946baf82e7a0e83c551adea7 (image=quay.io/ceph/ceph:v20, name=exciting_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:15 compute-0 systemd[1]: Started libpod-conmon-efc76ff88e6972c1011486208578c8eb68816ce9ad695b8fcd30bf375fafe64f.scope.
Dec 01 20:33:15 compute-0 podman[96332]: 2025-12-01 20:33:15.422238205 +0000 UTC m=+0.024729570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:33:15 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43132f008dcf734757a39d2979c7c05716f68c866f3aa2c54e2fdebf1de8dfd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43132f008dcf734757a39d2979c7c05716f68c866f3aa2c54e2fdebf1de8dfd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43132f008dcf734757a39d2979c7c05716f68c866f3aa2c54e2fdebf1de8dfd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43132f008dcf734757a39d2979c7c05716f68c866f3aa2c54e2fdebf1de8dfd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:33:15 compute-0 podman[96332]: 2025-12-01 20:33:15.55549175 +0000 UTC m=+0.157983045 container init efc76ff88e6972c1011486208578c8eb68816ce9ad695b8fcd30bf375fafe64f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 01 20:33:15 compute-0 podman[96332]: 2025-12-01 20:33:15.561105907 +0000 UTC m=+0.163597192 container start efc76ff88e6972c1011486208578c8eb68816ce9ad695b8fcd30bf375fafe64f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:15 compute-0 podman[96332]: 2025-12-01 20:33:15.566817896 +0000 UTC m=+0.169309191 container attach efc76ff88e6972c1011486208578c8eb68816ce9ad695b8fcd30bf375fafe64f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:33:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec 01 20:33:16 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125726048' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 01 20:33:16 compute-0 exciting_galois[96348]: 
Dec 01 20:33:16 compute-0 exciting_galois[96348]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":6}}
Dec 01 20:33:16 compute-0 systemd[1]: libpod-9cad772b6675cf2d50cf86a7f0018954a439d255946baf82e7a0e83c551adea7.scope: Deactivated successfully.
Dec 01 20:33:16 compute-0 podman[96315]: 2025-12-01 20:33:16.044716433 +0000 UTC m=+0.703490330 container died 9cad772b6675cf2d50cf86a7f0018954a439d255946baf82e7a0e83c551adea7 (image=quay.io/ceph/ceph:v20, name=exciting_galois, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e021eee5485380b6d1ad772dfc78b6cac5bfe9f551075e67f95cdcad591a8a6-merged.mount: Deactivated successfully.
Dec 01 20:33:16 compute-0 podman[96315]: 2025-12-01 20:33:16.07542401 +0000 UTC m=+0.734197907 container remove 9cad772b6675cf2d50cf86a7f0018954a439d255946baf82e7a0e83c551adea7 (image=quay.io/ceph/ceph:v20, name=exciting_galois, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:33:16 compute-0 sudo[96291]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:16 compute-0 systemd[1]: libpod-conmon-9cad772b6675cf2d50cf86a7f0018954a439d255946baf82e7a0e83c551adea7.scope: Deactivated successfully.
Dec 01 20:33:16 compute-0 ceph-mon[75880]: pgmap v75: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:16 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3125726048' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 01 20:33:16 compute-0 lvm[96466]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:33:16 compute-0 lvm[96466]: VG ceph_vg0 finished
Dec 01 20:33:16 compute-0 lvm[96469]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:33:16 compute-0 lvm[96469]: VG ceph_vg1 finished
Dec 01 20:33:16 compute-0 lvm[96471]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:33:16 compute-0 lvm[96471]: VG ceph_vg2 finished
Dec 01 20:33:16 compute-0 lvm[96472]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:33:16 compute-0 lvm[96472]: VG ceph_vg2 finished
Dec 01 20:33:16 compute-0 inspiring_perlman[96357]: {}
Dec 01 20:33:16 compute-0 lvm[96475]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:33:16 compute-0 lvm[96475]: VG ceph_vg2 finished
Dec 01 20:33:16 compute-0 systemd[1]: libpod-efc76ff88e6972c1011486208578c8eb68816ce9ad695b8fcd30bf375fafe64f.scope: Deactivated successfully.
Dec 01 20:33:16 compute-0 podman[96332]: 2025-12-01 20:33:16.413297059 +0000 UTC m=+1.015788324 container died efc76ff88e6972c1011486208578c8eb68816ce9ad695b8fcd30bf375fafe64f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_perlman, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:33:16 compute-0 systemd[1]: libpod-efc76ff88e6972c1011486208578c8eb68816ce9ad695b8fcd30bf375fafe64f.scope: Consumed 1.378s CPU time.
Dec 01 20:33:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-43132f008dcf734757a39d2979c7c05716f68c866f3aa2c54e2fdebf1de8dfd4-merged.mount: Deactivated successfully.
Dec 01 20:33:16 compute-0 podman[96332]: 2025-12-01 20:33:16.46446943 +0000 UTC m=+1.066960725 container remove efc76ff88e6972c1011486208578c8eb68816ce9ad695b8fcd30bf375fafe64f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_perlman, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 01 20:33:16 compute-0 systemd[1]: libpod-conmon-efc76ff88e6972c1011486208578c8eb68816ce9ad695b8fcd30bf375fafe64f.scope: Deactivated successfully.
Dec 01 20:33:16 compute-0 sudo[96217]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:33:16 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:33:16 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:16 compute-0 sudo[96485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:33:16 compute-0 sudo[96485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:33:16 compute-0 sudo[96485]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:17 compute-0 ceph-mon[75880]: pgmap v76: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:33:19 compute-0 ceph-mon[75880]: pgmap v77: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec 01 20:33:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:21 compute-0 ceph-mon[75880]: pgmap v78: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:23 compute-0 ceph-mon[75880]: pgmap v79: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:33:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:25 compute-0 ceph-mon[75880]: pgmap v80: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:27 compute-0 ceph-mon[75880]: pgmap v81: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:33:29 compute-0 ceph-mon[75880]: pgmap v82: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:31 compute-0 sshd-session[96510]: Accepted publickey for zuul from 192.168.122.30 port 49512 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:33:31 compute-0 systemd-logind[796]: New session 35 of user zuul.
Dec 01 20:33:31 compute-0 systemd[1]: Started Session 35 of User zuul.
Dec 01 20:33:31 compute-0 sshd-session[96510]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:33:31 compute-0 ceph-mon[75880]: pgmap v83: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:33:32
Dec 01 20:33:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:33:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:33:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes']
Dec 01 20:33:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:33:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:32 compute-0 python3.9[96663]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.789736366756355e-07 of space, bias 4.0, pg target 0.0008147683640107625 quantized to 16 (current 1)
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:33:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 01 20:33:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:33:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 01 20:33:33 compute-0 ceph-mon[75880]: pgmap v84: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:33 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 01 20:33:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 01 20:33:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec 01 20:33:33 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec 01 20:33:33 compute-0 ceph-mgr[76174]: [progress INFO root] update: starting ev f4d81d0d-2009-48bc-9402-f2e012b88fb8 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 01 20:33:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 01 20:33:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 01 20:33:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:33:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 01 20:33:34 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 01 20:33:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 01 20:33:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 01 20:33:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 01 20:33:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 01 20:33:35 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 01 20:33:35 compute-0 ceph-mon[75880]: osdmap e34: 3 total, 3 up, 3 in
Dec 01 20:33:35 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 01 20:33:35 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 01 20:33:35 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=12.321860313s) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active pruub 80.946311951s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:35 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 01 20:33:35 compute-0 ceph-mgr[76174]: [progress INFO root] update: starting ev f8edfe61-1553-4677-9e4a-e4e3c5b7a458 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 01 20:33:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 01 20:33:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 01 20:33:35 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=12.321860313s) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown pruub 80.946311951s@ mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:35 compute-0 sudo[96879]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbzhikgurymidwldhzirnnekbgnkfdhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621214.5707579-32-74367868927224/AnsiballZ_command.py'
Dec 01 20:33:35 compute-0 sudo[96879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:33:35 compute-0 python3.9[96881]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:33:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 01 20:33:36 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 01 20:33:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=35/36 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:36 compute-0 ceph-mgr[76174]: [progress INFO root] update: starting ev 3403dbc0-c4c0-4706-a923-561b7dc3b18d (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 01 20:33:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 01 20:33:36 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 01 20:33:36 compute-0 ceph-mon[75880]: pgmap v86: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 01 20:33:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 01 20:33:36 compute-0 ceph-mon[75880]: osdmap e35: 3 total, 3 up, 3 in
Dec 01 20:33:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 01 20:33:36 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 01 20:33:36 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 01 20:33:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v89: 38 pgs: 31 unknown, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 01 20:33:36 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 01 20:33:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 01 20:33:36 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 01 20:33:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 01 20:33:37 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 01 20:33:37 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 01 20:33:37 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 01 20:33:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 01 20:33:37 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 01 20:33:37 compute-0 ceph-mgr[76174]: [progress INFO root] update: starting ev 0fe91df8-3527-466c-9d22-11f209c74e74 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 01 20:33:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Dec 01 20:33:37 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Dec 01 20:33:37 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37 pruub=11.311786652s) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active pruub 87.041725159s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:37 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37 pruub=11.311786652s) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown pruub 87.041725159s@ mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:37 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 01 20:33:37 compute-0 ceph-mon[75880]: osdmap e36: 3 total, 3 up, 3 in
Dec 01 20:33:37 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 01 20:33:37 compute-0 ceph-mon[75880]: 2.1f scrub starts
Dec 01 20:33:37 compute-0 ceph-mon[75880]: 2.1f scrub ok
Dec 01 20:33:37 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 01 20:33:37 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 01 20:33:37 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 01 20:33:37 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 01 20:33:37 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 01 20:33:37 compute-0 ceph-mon[75880]: osdmap e37: 3 total, 3 up, 3 in
Dec 01 20:33:37 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Dec 01 20:33:37 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=12.133437157s) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 92.636787415s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:37 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=12.133437157s) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 92.636787415s@ mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 01 20:33:38 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 01 20:33:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 01 20:33:38 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 01 20:33:38 compute-0 ceph-mgr[76174]: [progress INFO root] update: starting ev 5b21c7fe-6596-475a-be0a-eb02361f952c (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec 01 20:33:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 01 20:33:38 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=37/38 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-mon[75880]: pgmap v89: 38 pgs: 31 unknown, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:38 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 01 20:33:38 compute-0 ceph-mon[75880]: osdmap e38: 3 total, 3 up, 3 in
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [1] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:38 compute-0 ceph-mgr[76174]: [progress WARNING root] Starting Global Recovery Event,63 pgs not in active + clean state
Dec 01 20:33:38 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 01 20:33:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v92: 100 pgs: 1 peering, 31 unknown, 68 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:38 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 01 20:33:38 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec 01 20:33:39 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec 01 20:33:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Dec 01 20:33:39 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Dec 01 20:33:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 01 20:33:39 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 01 20:33:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 01 20:33:40 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 01 20:33:40 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 01 20:33:40 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 01 20:33:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 01 20:33:40 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] update: starting ev 14a3c3e9-79eb-4b41-b1e4-78c504b30720 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] complete: finished ev f4d81d0d-2009-48bc-9402-f2e012b88fb8 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] Completed event f4d81d0d-2009-48bc-9402-f2e012b88fb8 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 6 seconds
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] complete: finished ev f8edfe61-1553-4677-9e4a-e4e3c5b7a458 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] Completed event f8edfe61-1553-4677-9e4a-e4e3c5b7a458 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 5 seconds
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] complete: finished ev 3403dbc0-c4c0-4706-a923-561b7dc3b18d (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] Completed event 3403dbc0-c4c0-4706-a923-561b7dc3b18d (PG autoscaler increasing pool 4 PGs from 1 to 32) in 4 seconds
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] complete: finished ev 0fe91df8-3527-466c-9d22-11f209c74e74 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] Completed event 0fe91df8-3527-466c-9d22-11f209c74e74 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 3 seconds
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] complete: finished ev 5b21c7fe-6596-475a-be0a-eb02361f952c (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] Completed event 5b21c7fe-6596-475a-be0a-eb02361f952c (PG autoscaler increasing pool 6 PGs from 1 to 16) in 2 seconds
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] complete: finished ev 14a3c3e9-79eb-4b41-b1e4-78c504b30720 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: [progress INFO root] Completed event 14a3c3e9-79eb-4b41-b1e4-78c504b30720 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec 01 20:33:40 compute-0 ceph-mon[75880]: pgmap v92: 100 pgs: 1 peering, 31 unknown, 68 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:40 compute-0 ceph-mon[75880]: 3.1f scrub starts
Dec 01 20:33:40 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Dec 01 20:33:40 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 01 20:33:40 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 01 20:33:40 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 01 20:33:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v94: 146 pgs: 1 peering, 77 unknown, 68 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 01 20:33:40 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 01 20:33:40 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 01 20:33:40 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 01 20:33:40 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active pruub 84.990226746s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:40 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 84.990226746s@ mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:40 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 01 20:33:40 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 01 20:33:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 01 20:33:41 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 01 20:33:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 01 20:33:41 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-mon[75880]: 2.1d scrub starts
Dec 01 20:33:41 compute-0 ceph-mon[75880]: 2.1d scrub ok
Dec 01 20:33:41 compute-0 ceph-mon[75880]: 3.1f scrub ok
Dec 01 20:33:41 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 01 20:33:41 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 01 20:33:41 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 01 20:33:41 compute-0 ceph-mon[75880]: osdmap e39: 3 total, 3 up, 3 in
Dec 01 20:33:41 compute-0 ceph-mon[75880]: 2.b scrub starts
Dec 01 20:33:41 compute-0 ceph-mon[75880]: 2.b scrub ok
Dec 01 20:33:41 compute-0 ceph-mon[75880]: pgmap v94: 146 pgs: 1 peering, 77 unknown, 68 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:41 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:41 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec 01 20:33:41 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec 01 20:33:41 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 01 20:33:41 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=12.999807358s) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active pruub 93.644927979s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:41 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=23/24 n=22 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=10.446947098s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 33'38 active pruub 95.665847778s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:41 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=10.446947098s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 unknown pruub 95.665847778s@ mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:41 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 01 20:33:41 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=12.999807358s) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown pruub 93.644927979s@ mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 01 20:33:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 01 20:33:42 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-mon[75880]: 3.1e scrub starts
Dec 01 20:33:42 compute-0 ceph-mon[75880]: 3.1e scrub ok
Dec 01 20:33:42 compute-0 ceph-mon[75880]: 4.1f scrub starts
Dec 01 20:33:42 compute-0 ceph-mon[75880]: 4.1f scrub ok
Dec 01 20:33:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 01 20:33:42 compute-0 ceph-mon[75880]: osdmap e40: 3 total, 3 up, 3 in
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=40/41 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 1 peering, 77 unknown, 99 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 01 20:33:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 01 20:33:43 compute-0 sudo[96879]: pam_unix(sudo:session): session closed for user root
Dec 01 20:33:43 compute-0 ceph-mgr[76174]: [progress INFO root] Writing back 10 completed events
Dec 01 20:33:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 01 20:33:43 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 01 20:33:43 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 01 20:33:43 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:43 compute-0 sshd-session[96513]: Connection closed by 192.168.122.30 port 49512
Dec 01 20:33:43 compute-0 sshd-session[96510]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:33:43 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Dec 01 20:33:43 compute-0 systemd[1]: session-35.scope: Consumed 8.286s CPU time.
Dec 01 20:33:43 compute-0 systemd-logind[796]: Session 35 logged out. Waiting for processes to exit.
Dec 01 20:33:43 compute-0 systemd-logind[796]: Removed session 35.
Dec 01 20:33:44 compute-0 ceph-mon[75880]: 3.9 scrub starts
Dec 01 20:33:44 compute-0 ceph-mon[75880]: 3.9 scrub ok
Dec 01 20:33:44 compute-0 ceph-mon[75880]: 4.8 scrub starts
Dec 01 20:33:44 compute-0 ceph-mon[75880]: 4.8 scrub ok
Dec 01 20:33:44 compute-0 ceph-mon[75880]: osdmap e41: 3 total, 3 up, 3 in
Dec 01 20:33:44 compute-0 ceph-mon[75880]: pgmap v97: 177 pgs: 1 peering, 77 unknown, 99 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:44 compute-0 ceph-mon[75880]: 2.1c scrub starts
Dec 01 20:33:44 compute-0 ceph-mon[75880]: 2.1c scrub ok
Dec 01 20:33:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 1 peering, 31 unknown, 145 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:44 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 01 20:33:44 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 01 20:33:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:33:45 compute-0 ceph-mon[75880]: 3.a scrub starts
Dec 01 20:33:45 compute-0 ceph-mon[75880]: 3.a scrub ok
Dec 01 20:33:45 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:45 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 01 20:33:45 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 01 20:33:46 compute-0 ceph-mon[75880]: pgmap v98: 177 pgs: 1 peering, 31 unknown, 145 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:46 compute-0 ceph-mon[75880]: 4.1c scrub starts
Dec 01 20:33:46 compute-0 ceph-mon[75880]: 4.1c scrub ok
Dec 01 20:33:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 1 peering, 31 unknown, 145 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:47 compute-0 ceph-mon[75880]: 4.1d scrub starts
Dec 01 20:33:47 compute-0 ceph-mon[75880]: 4.1d scrub ok
Dec 01 20:33:48 compute-0 ceph-mon[75880]: pgmap v99: 177 pgs: 1 peering, 31 unknown, 145 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:48 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 01 20:33:48 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 01 20:33:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 01 20:33:48 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 01 20:33:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 01 20:33:48 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 01 20:33:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 01 20:33:48 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 01 20:33:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 01 20:33:48 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 01 20:33:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 01 20:33:48 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 01 20:33:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 01 20:33:48 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 01 20:33:48 compute-0 ceph-mgr[76174]: [progress INFO root] Completed event 26fd2cbd-2661-424a-a5d9-a48325202b92 (Global Recovery Event) in 10 seconds
Dec 01 20:33:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 01 20:33:49 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 01 20:33:49 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 01 20:33:49 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 01 20:33:49 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 01 20:33:49 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 01 20:33:49 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 01 20:33:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 01 20:33:49 compute-0 ceph-mon[75880]: 2.8 scrub starts
Dec 01 20:33:49 compute-0 ceph-mon[75880]: 2.8 scrub ok
Dec 01 20:33:49 compute-0 ceph-mon[75880]: pgmap v100: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:49 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 01 20:33:49 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 01 20:33:49 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 01 20:33:49 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 01 20:33:49 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 01 20:33:49 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739074707s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.581520081s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582242966s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424697876s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582140923s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424591064s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582109451s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424568176s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739041328s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581520081s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582103729s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424591064s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582201958s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424697876s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582069397s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424568176s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582024574s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424728394s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581993103s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424720764s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582006454s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424728394s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739621162s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.582366943s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738656044s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.581413269s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581974983s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424720764s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738636971s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581413269s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739597321s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582366943s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581933022s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424774170s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581902504s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424766541s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581916809s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424774170s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738620758s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.581542969s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738601685s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581542969s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581887245s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581788063s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424766541s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581771851s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738469124s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.581558228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738449097s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581558228s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589272499s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432479858s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738572121s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.581802368s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589234352s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432495117s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738533020s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581802368s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739022255s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.582336426s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739005089s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582336426s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589239120s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432693481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589222908s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589182854s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588839531s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432495117s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581080437s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424758911s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588819504s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581054688s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424758911s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588906288s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432693481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588892937s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738539696s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.582351685s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588840485s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432685852s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738516808s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582351685s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588879585s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432746887s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588824272s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432685852s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588863373s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432746887s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588925362s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432853699s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588911057s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432853699s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588806152s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432777405s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588847160s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432815552s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588789940s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432777405s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588831902s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432815552s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588873863s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432914734s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588836670s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432914734s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.587801933s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432479858s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730834007s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.009506226s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.580264091s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.858955383s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730809212s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009506226s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.580239296s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858955383s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.718297958s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907165527s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.448785782s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637702942s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.718235016s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907249451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579126358s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.858009338s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579100609s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579756737s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.858795166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579740524s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858795166s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579789162s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.858909607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579769135s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858909607s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732884407s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012062073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732856750s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012062073s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578691483s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.857963562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732902527s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012191772s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578685760s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.857986450s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578671455s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857963562s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578668594s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857986450s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730150223s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.009513855s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730113029s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009513855s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578577042s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.858009338s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570871353s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850326538s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578563690s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570859909s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732740402s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012268066s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732756615s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012290955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732745171s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012290955s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732726097s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012268066s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570790291s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850372314s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570778847s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850372314s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732734680s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012397766s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732617378s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012283325s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732597351s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012283325s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732756615s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012458801s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732712746s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012428284s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732659340s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012397766s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570277214s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850196838s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570260048s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732433319s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012466431s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732419014s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012466431s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732224464s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012458801s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732166290s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012428284s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569802284s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850204468s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569784164s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850204468s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569887161s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850326538s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569872856s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731929779s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012481689s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731917381s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012481689s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569235802s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849822998s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569217682s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849822998s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731840134s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012496948s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569419861s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850196838s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569405556s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731627464s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012512207s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731607437s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012512207s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731595993s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012496948s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568803787s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849807739s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568696976s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849746704s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568675995s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849746704s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731457710s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012634277s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731442451s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012634277s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715600967s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907165527s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568431854s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849739075s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568408012s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731233597s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012611389s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731206894s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012611389s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731218338s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012695312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568694115s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850189209s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731201172s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012695312s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568683624s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850189209s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731010437s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012657166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730994225s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012657166s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445893288s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637702942s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445705414s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637687683s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445738792s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637771606s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445675850s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637687683s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445720673s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445558548s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637771606s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445539474s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714890480s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907226562s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714875221s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445172310s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637535095s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445319176s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637695312s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445153236s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637535095s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445299149s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637695312s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714809418s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907249451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714799881s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567954063s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849739075s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714690208s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907188416s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567939758s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730963707s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012786865s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730948448s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012786865s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714676857s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907188416s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567220688s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849105835s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730884552s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012809753s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444966316s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637519836s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730873108s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012809753s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567152977s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849166870s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714663506s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907226562s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567141533s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849166870s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714652061s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714756012s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907356262s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444934845s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637519836s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714745522s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907356262s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444891930s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637565613s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444880486s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637565613s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714426994s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907234192s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714411736s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907234192s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444160461s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637107849s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444144249s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637107849s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716694832s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.909675598s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716678619s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909675598s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443941116s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636978149s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443918228s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636978149s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716846466s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.909996033s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716833115s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909996033s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443575859s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636749268s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443558693s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716737747s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910011292s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443473816s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636749268s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716727257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443461418s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443627357s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636924744s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443606377s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636924744s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716434479s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910003662s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716445923s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910011292s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716422081s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910003662s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443201065s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636795044s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443183899s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636795044s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716404915s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442579269s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636421204s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442564011s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636421204s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716152191s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910018921s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716130257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910018921s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442068100s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636016846s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442056656s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636016846s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716066360s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910041809s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716053009s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910041809s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716078758s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910140991s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442327499s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636405945s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716067314s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910140991s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442316055s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636405945s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441908836s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636001587s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441880226s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636001587s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441657066s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.635810852s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441601753s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.635780334s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441648483s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441589355s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635780334s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715913773s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910156250s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715903282s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910156250s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441512108s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.635810852s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441550255s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.635856628s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441536903s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635856628s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441493034s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715862274s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910232544s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715847969s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910232544s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.437717438s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.632148743s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.437705040s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.632148743s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715730667s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910224915s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715719223s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910224915s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715569496s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910163879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715547562s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910163879s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.712153435s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568786621s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849807739s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567207336s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849105835s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732886314s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012191772s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:49 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:33:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 01 20:33:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 01 20:33:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 01 20:33:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 01 20:33:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 01 20:33:50 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 01 20:33:50 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 01 20:33:50 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 01 20:33:50 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 01 20:33:50 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 01 20:33:50 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 01 20:33:50 compute-0 ceph-mon[75880]: osdmap e42: 3 total, 3 up, 3 in
Dec 01 20:33:50 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726775169s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 active pruub 109.582183838s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726726532s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582183838s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726365089s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 active pruub 109.582260132s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726216316s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582260132s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.725442886s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 active pruub 109.581634521s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.725407600s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.581634521s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.721846581s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 active pruub 109.578636169s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.721824646s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.578636169s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:50 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:50 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 01 20:33:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 01 20:33:51 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 01 20:33:51 compute-0 ceph-mon[75880]: pgmap v102: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:33:51 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 01 20:33:51 compute-0 ceph-mon[75880]: osdmap e43: 3 total, 3 up, 3 in
Dec 01 20:33:51 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:51 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:51 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:51 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:52 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 01 20:33:52 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 01 20:33:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 134 B/s, 1 keys/s, 2 objects/s recovering
Dec 01 20:33:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 01 20:33:52 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 01 20:33:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 01 20:33:52 compute-0 ceph-mon[75880]: osdmap e44: 3 total, 3 up, 3 in
Dec 01 20:33:52 compute-0 ceph-mon[75880]: 5.1c scrub starts
Dec 01 20:33:52 compute-0 ceph-mon[75880]: 5.1c scrub ok
Dec 01 20:33:52 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 01 20:33:52 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 01 20:33:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 01 20:33:52 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 01 20:33:53 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 01 20:33:53 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 01 20:33:53 compute-0 ceph-mgr[76174]: [progress INFO root] Writing back 11 completed events
Dec 01 20:33:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 01 20:33:53 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:53 compute-0 ceph-mon[75880]: pgmap v105: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 134 B/s, 1 keys/s, 2 objects/s recovering
Dec 01 20:33:53 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 01 20:33:53 compute-0 ceph-mon[75880]: osdmap e45: 3 total, 3 up, 3 in
Dec 01 20:33:53 compute-0 ceph-mon[75880]: 2.1a scrub starts
Dec 01 20:33:53 compute-0 ceph-mon[75880]: 2.1a scrub ok
Dec 01 20:33:53 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:33:53 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.965459824s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 active pruub 105.293518066s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:53 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964842796s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 active pruub 105.293350220s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:53 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.965383530s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293518066s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:53 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964750290s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293350220s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:53 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964635849s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 active pruub 105.293281555s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:53 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964510918s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293281555s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:53 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964970589s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 active pruub 105.294242859s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:53 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:53 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964728355s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.294242859s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:53 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:53 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:53 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:53 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 01 20:33:53 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 01 20:33:54 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 01 20:33:54 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 01 20:33:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 165 B/s, 2 keys/s, 2 objects/s recovering
Dec 01 20:33:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 01 20:33:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 01 20:33:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 01 20:33:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 01 20:33:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 01 20:33:54 compute-0 ceph-mon[75880]: 7.1e scrub starts
Dec 01 20:33:54 compute-0 ceph-mon[75880]: 7.1e scrub ok
Dec 01 20:33:54 compute-0 ceph-mon[75880]: 5.1f scrub starts
Dec 01 20:33:54 compute-0 ceph-mon[75880]: 5.1f scrub ok
Dec 01 20:33:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 01 20:33:54 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 01 20:33:54 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667674065s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 109.582473755s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:54 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:54 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664979935s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 109.580123901s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:54 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:54 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:54 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:54 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:54 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:54 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:54 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:33:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 01 20:33:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 01 20:33:55 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 01 20:33:55 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:55 compute-0 ceph-mon[75880]: pgmap v107: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 165 B/s, 2 keys/s, 2 objects/s recovering
Dec 01 20:33:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 01 20:33:55 compute-0 ceph-mon[75880]: osdmap e46: 3 total, 3 up, 3 in
Dec 01 20:33:55 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:56 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 01 20:33:56 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 01 20:33:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 166 B/s, 2 keys/s, 2 objects/s recovering
Dec 01 20:33:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 01 20:33:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 01 20:33:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 01 20:33:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 01 20:33:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 01 20:33:56 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 01 20:33:56 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934832573s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 active pruub 105.293449402s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:56 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:56 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933793068s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 active pruub 105.293289185s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:33:56 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:33:56 compute-0 ceph-mon[75880]: osdmap e47: 3 total, 3 up, 3 in
Dec 01 20:33:56 compute-0 ceph-mon[75880]: 5.10 scrub starts
Dec 01 20:33:56 compute-0 ceph-mon[75880]: 5.10 scrub ok
Dec 01 20:33:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 01 20:33:56 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:56 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:33:56 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec 01 20:33:56 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec 01 20:33:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 01 20:33:57 compute-0 ceph-mon[75880]: pgmap v110: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 166 B/s, 2 keys/s, 2 objects/s recovering
Dec 01 20:33:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 01 20:33:57 compute-0 ceph-mon[75880]: osdmap e48: 3 total, 3 up, 3 in
Dec 01 20:33:57 compute-0 ceph-mon[75880]: 7.1d scrub starts
Dec 01 20:33:57 compute-0 ceph-mon[75880]: 7.1d scrub ok
Dec 01 20:33:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 01 20:33:57 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 01 20:33:57 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:57 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:33:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 607 B/s, 2 keys/s, 3 objects/s recovering
Dec 01 20:33:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 01 20:33:58 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 01 20:33:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 01 20:33:58 compute-0 ceph-mon[75880]: osdmap e49: 3 total, 3 up, 3 in
Dec 01 20:33:58 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 01 20:33:58 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 01 20:33:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 01 20:33:58 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 01 20:33:58 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 01 20:33:58 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 01 20:33:59 compute-0 ceph-mon[75880]: pgmap v113: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 607 B/s, 2 keys/s, 3 objects/s recovering
Dec 01 20:33:59 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 01 20:33:59 compute-0 ceph-mon[75880]: osdmap e50: 3 total, 3 up, 3 in
Dec 01 20:33:59 compute-0 ceph-mon[75880]: 4.1e scrub starts
Dec 01 20:33:59 compute-0 ceph-mon[75880]: 4.1e scrub ok
Dec 01 20:33:59 compute-0 sshd-session[96939]: Accepted publickey for zuul from 192.168.122.30 port 58896 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:33:59 compute-0 systemd-logind[796]: New session 36 of user zuul.
Dec 01 20:33:59 compute-0 systemd[1]: Started Session 36 of User zuul.
Dec 01 20:33:59 compute-0 sshd-session[96939]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:34:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 504 B/s, 2 keys/s, 3 objects/s recovering
Dec 01 20:34:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 01 20:34:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 01 20:34:00 compute-0 python3.9[97092]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 01 20:34:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 01 20:34:00 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 01 20:34:00 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 01 20:34:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 01 20:34:00 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 01 20:34:01 compute-0 ceph-mon[75880]: pgmap v115: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 504 B/s, 2 keys/s, 3 objects/s recovering
Dec 01 20:34:01 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 01 20:34:01 compute-0 ceph-mon[75880]: osdmap e51: 3 total, 3 up, 3 in
Dec 01 20:34:01 compute-0 python3.9[97266]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:34:01 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 01 20:34:01 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 01 20:34:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 418 B/s, 1 keys/s, 2 objects/s recovering
Dec 01 20:34:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 01 20:34:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 01 20:34:02 compute-0 sudo[97420]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biscgamakjpnskvjllvekaiqgfdcsdte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621242.2494078-45-259130798584640/AnsiballZ_command.py'
Dec 01 20:34:02 compute-0 sudo[97420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:34:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 01 20:34:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 01 20:34:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 01 20:34:02 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 01 20:34:02 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580068588s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 active pruub 117.580322266s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:34:02 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:34:02 compute-0 ceph-mon[75880]: 3.1a scrub starts
Dec 01 20:34:02 compute-0 ceph-mon[75880]: 3.1a scrub ok
Dec 01 20:34:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 01 20:34:02 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:34:02 compute-0 python3.9[97422]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:34:02 compute-0 sudo[97420]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:34:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:34:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:34:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:34:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:34:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:34:03 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 01 20:34:03 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 01 20:34:03 compute-0 sudo[97573]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grjkulznxrbuirpgwaltwfzighdbwufn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621243.2487125-57-176229709407104/AnsiballZ_stat.py'
Dec 01 20:34:03 compute-0 sudo[97573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:34:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 01 20:34:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 01 20:34:03 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 01 20:34:03 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:34:03 compute-0 ceph-mon[75880]: pgmap v117: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 418 B/s, 1 keys/s, 2 objects/s recovering
Dec 01 20:34:03 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 01 20:34:03 compute-0 ceph-mon[75880]: osdmap e52: 3 total, 3 up, 3 in
Dec 01 20:34:03 compute-0 ceph-mon[75880]: 2.14 scrub starts
Dec 01 20:34:03 compute-0 ceph-mon[75880]: 2.14 scrub ok
Dec 01 20:34:03 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 01 20:34:03 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 01 20:34:03 compute-0 python3.9[97575]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:34:03 compute-0 sudo[97573]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:04 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 01 20:34:04 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 01 20:34:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 1 peering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:04 compute-0 sudo[97727]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jixztvhzajdsflnylbwktfdgpppizmhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621244.1727624-68-99176849058273/AnsiballZ_file.py'
Dec 01 20:34:04 compute-0 sudo[97727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:34:04 compute-0 ceph-mon[75880]: osdmap e53: 3 total, 3 up, 3 in
Dec 01 20:34:04 compute-0 ceph-mon[75880]: 3.19 scrub starts
Dec 01 20:34:04 compute-0 ceph-mon[75880]: 3.19 scrub ok
Dec 01 20:34:04 compute-0 ceph-mon[75880]: 2.12 scrub starts
Dec 01 20:34:04 compute-0 ceph-mon[75880]: 2.12 scrub ok
Dec 01 20:34:04 compute-0 python3.9[97729]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:34:04 compute-0 sudo[97727]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:05 compute-0 sudo[97879]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzbgcjmujrltwyndvxjazrqjdikbklip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621245.1028485-77-129468665343618/AnsiballZ_file.py'
Dec 01 20:34:05 compute-0 sudo[97879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:34:05 compute-0 python3.9[97881]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:34:05 compute-0 sudo[97879]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:05 compute-0 ceph-mon[75880]: pgmap v120: 177 pgs: 1 peering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:05 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 01 20:34:05 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 01 20:34:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v121: 177 pgs: 1 peering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:06 compute-0 python3.9[98031]: ansible-ansible.builtin.service_facts Invoked
Dec 01 20:34:06 compute-0 network[98048]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 01 20:34:06 compute-0 network[98049]: 'network-scripts' will be removed from distribution in near future.
Dec 01 20:34:06 compute-0 network[98050]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 01 20:34:06 compute-0 ceph-mon[75880]: 7.12 scrub starts
Dec 01 20:34:06 compute-0 ceph-mon[75880]: 7.12 scrub ok
Dec 01 20:34:06 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 01 20:34:06 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 01 20:34:07 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 01 20:34:07 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 01 20:34:07 compute-0 ceph-mon[75880]: pgmap v121: 177 pgs: 1 peering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:07 compute-0 ceph-mon[75880]: 3.14 scrub starts
Dec 01 20:34:07 compute-0 ceph-mon[75880]: 3.14 scrub ok
Dec 01 20:34:07 compute-0 ceph-mon[75880]: 2.10 scrub starts
Dec 01 20:34:07 compute-0 ceph-mon[75880]: 2.10 scrub ok
Dec 01 20:34:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v122: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 01 20:34:08 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 01 20:34:08 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 01 20:34:08 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 01 20:34:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 01 20:34:08 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 01 20:34:08 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 01 20:34:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 01 20:34:08 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 01 20:34:09 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 01 20:34:09 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 01 20:34:09 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262128830s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 active pruub 121.294456482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:34:09 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:34:09 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:34:09 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 01 20:34:09 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 01 20:34:09 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 01 20:34:09 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 01 20:34:09 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 01 20:34:09 compute-0 ceph-mon[75880]: pgmap v122: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:09 compute-0 ceph-mon[75880]: 3.13 scrub starts
Dec 01 20:34:09 compute-0 ceph-mon[75880]: 3.13 scrub ok
Dec 01 20:34:09 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 01 20:34:09 compute-0 ceph-mon[75880]: osdmap e54: 3 total, 3 up, 3 in
Dec 01 20:34:09 compute-0 ceph-mon[75880]: 5.17 scrub starts
Dec 01 20:34:09 compute-0 ceph-mon[75880]: 5.17 scrub ok
Dec 01 20:34:09 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:34:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:10 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 01 20:34:10 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 01 20:34:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 01 20:34:10 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 01 20:34:10 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 01 20:34:10 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 01 20:34:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 01 20:34:10 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 01 20:34:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 01 20:34:10 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 01 20:34:10 compute-0 ceph-mon[75880]: 7.10 scrub starts
Dec 01 20:34:10 compute-0 ceph-mon[75880]: osdmap e55: 3 total, 3 up, 3 in
Dec 01 20:34:10 compute-0 ceph-mon[75880]: 7.10 scrub ok
Dec 01 20:34:10 compute-0 ceph-mon[75880]: 5.8 scrub starts
Dec 01 20:34:10 compute-0 ceph-mon[75880]: 5.8 scrub ok
Dec 01 20:34:10 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 01 20:34:10 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678371429s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 active pruub 122.300827026s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:34:10 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:34:10 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:34:11 compute-0 python3.9[98310]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:34:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 01 20:34:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 01 20:34:11 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 01 20:34:11 compute-0 ceph-mon[75880]: pgmap v125: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:11 compute-0 ceph-mon[75880]: 7.17 scrub starts
Dec 01 20:34:11 compute-0 ceph-mon[75880]: 7.17 scrub ok
Dec 01 20:34:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 01 20:34:11 compute-0 ceph-mon[75880]: osdmap e56: 3 total, 3 up, 3 in
Dec 01 20:34:11 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:34:12 compute-0 python3.9[98460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:34:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v128: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 01 20:34:12 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 01 20:34:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 01 20:34:12 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 01 20:34:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 01 20:34:12 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 01 20:34:12 compute-0 ceph-mon[75880]: osdmap e57: 3 total, 3 up, 3 in
Dec 01 20:34:12 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 01 20:34:13 compute-0 python3.9[98614]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:34:13 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 01 20:34:13 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 01 20:34:13 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 01 20:34:13 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 01 20:34:13 compute-0 ceph-mon[75880]: pgmap v128: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:13 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 01 20:34:13 compute-0 ceph-mon[75880]: osdmap e58: 3 total, 3 up, 3 in
Dec 01 20:34:14 compute-0 sudo[98770]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgddrdpnatlnyiakkpcoledyhxjxcfvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621253.766813-125-212306142445971/AnsiballZ_setup.py'
Dec 01 20:34:14 compute-0 sudo[98770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:34:14 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 01 20:34:14 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 01 20:34:14 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290416718s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 active pruub 129.917236328s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:34:14 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:34:14 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:34:14 compute-0 python3.9[98772]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:34:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:14 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 01 20:34:14 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 01 20:34:14 compute-0 sudo[98770]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:14 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 01 20:34:14 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 01 20:34:14 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 01 20:34:14 compute-0 ceph-mon[75880]: 7.16 scrub starts
Dec 01 20:34:14 compute-0 ceph-mon[75880]: 7.16 scrub ok
Dec 01 20:34:14 compute-0 ceph-mon[75880]: 4.b scrub starts
Dec 01 20:34:14 compute-0 ceph-mon[75880]: 4.b scrub ok
Dec 01 20:34:14 compute-0 ceph-mon[75880]: 2.e scrub starts
Dec 01 20:34:14 compute-0 ceph-mon[75880]: 2.e scrub ok
Dec 01 20:34:14 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 01 20:34:14 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 01 20:34:14 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:34:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:15 compute-0 sudo[98854]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kynkthnbzqixgvofxrouaubnmbkfzzoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621253.766813-125-212306142445971/AnsiballZ_dnf.py'
Dec 01 20:34:15 compute-0 sudo[98854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:34:15 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 01 20:34:15 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 01 20:34:15 compute-0 python3.9[98856]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:34:15 compute-0 ceph-mon[75880]: pgmap v130: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:15 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 01 20:34:15 compute-0 ceph-mon[75880]: osdmap e59: 3 total, 3 up, 3 in
Dec 01 20:34:15 compute-0 ceph-mon[75880]: 5.a scrub starts
Dec 01 20:34:15 compute-0 ceph-mon[75880]: 5.a scrub ok
Dec 01 20:34:16 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 01 20:34:16 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 01 20:34:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 01 20:34:16 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 01 20:34:16 compute-0 sudo[98891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:34:16 compute-0 sudo[98891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:34:16 compute-0 sudo[98891]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:16 compute-0 sudo[98919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:34:16 compute-0 sudo[98919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:34:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 01 20:34:16 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 01 20:34:16 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 01 20:34:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 01 20:34:16 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 01 20:34:16 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 01 20:34:16 compute-0 ceph-mon[75880]: 2.c scrub starts
Dec 01 20:34:16 compute-0 ceph-mon[75880]: 2.c scrub ok
Dec 01 20:34:16 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 01 20:34:17 compute-0 sudo[98919]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:34:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:34:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:34:17 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:34:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:34:17 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:34:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:34:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:34:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:34:17 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:34:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:34:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:34:17 compute-0 sudo[98996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:34:17 compute-0 sudo[98996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:34:17 compute-0 sudo[98996]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:17 compute-0 sudo[99021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:34:17 compute-0 sudo[99021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:34:17 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863982201s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 active pruub 132.959640503s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:34:17 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:34:17 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:34:17 compute-0 ceph-mon[75880]: pgmap v132: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:17 compute-0 ceph-mon[75880]: 4.6 scrub starts
Dec 01 20:34:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 01 20:34:17 compute-0 ceph-mon[75880]: osdmap e60: 3 total, 3 up, 3 in
Dec 01 20:34:17 compute-0 ceph-mon[75880]: 4.6 scrub ok
Dec 01 20:34:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:34:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:34:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:34:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:34:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:34:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:34:17 compute-0 podman[99059]: 2025-12-01 20:34:17.923444726 +0000 UTC m=+0.052373251 container create 01d15e743124446d7ac4bb211463f7205bb994ae9e61266f5a4262339a5d548e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:34:17 compute-0 systemd[77264]: Starting Mark boot as successful...
Dec 01 20:34:17 compute-0 systemd[1]: Started libpod-conmon-01d15e743124446d7ac4bb211463f7205bb994ae9e61266f5a4262339a5d548e.scope.
Dec 01 20:34:17 compute-0 systemd[77264]: Finished Mark boot as successful.
Dec 01 20:34:17 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:34:17 compute-0 podman[99059]: 2025-12-01 20:34:17.979904921 +0000 UTC m=+0.108833416 container init 01d15e743124446d7ac4bb211463f7205bb994ae9e61266f5a4262339a5d548e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:34:17 compute-0 podman[99059]: 2025-12-01 20:34:17.892731225 +0000 UTC m=+0.021659750 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:34:17 compute-0 podman[99059]: 2025-12-01 20:34:17.988403654 +0000 UTC m=+0.117332139 container start 01d15e743124446d7ac4bb211463f7205bb994ae9e61266f5a4262339a5d548e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 01 20:34:17 compute-0 podman[99059]: 2025-12-01 20:34:17.991619443 +0000 UTC m=+0.120547928 container attach 01d15e743124446d7ac4bb211463f7205bb994ae9e61266f5a4262339a5d548e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:34:17 compute-0 systemd[1]: libpod-01d15e743124446d7ac4bb211463f7205bb994ae9e61266f5a4262339a5d548e.scope: Deactivated successfully.
Dec 01 20:34:17 compute-0 interesting_moser[99076]: 167 167
Dec 01 20:34:17 compute-0 conmon[99076]: conmon 01d15e743124446d7ac4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01d15e743124446d7ac4bb211463f7205bb994ae9e61266f5a4262339a5d548e.scope/container/memory.events
Dec 01 20:34:17 compute-0 podman[99059]: 2025-12-01 20:34:17.99538512 +0000 UTC m=+0.124313645 container died 01d15e743124446d7ac4bb211463f7205bb994ae9e61266f5a4262339a5d548e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:34:18 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 01 20:34:18 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 01 20:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-17e1b1e1d737638f975acf954a304c931db8afcf14af627680d7f610db55cfc0-merged.mount: Deactivated successfully.
Dec 01 20:34:18 compute-0 podman[99059]: 2025-12-01 20:34:18.270546578 +0000 UTC m=+0.399475103 container remove 01d15e743124446d7ac4bb211463f7205bb994ae9e61266f5a4262339a5d548e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:34:18 compute-0 systemd[1]: libpod-conmon-01d15e743124446d7ac4bb211463f7205bb994ae9e61266f5a4262339a5d548e.scope: Deactivated successfully.
Dec 01 20:34:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 01 20:34:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 01 20:34:18 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 01 20:34:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v135: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 01 20:34:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 01 20:34:18 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 01 20:34:18 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:34:18 compute-0 podman[99102]: 2025-12-01 20:34:18.480925662 +0000 UTC m=+0.048197570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:34:18 compute-0 podman[99102]: 2025-12-01 20:34:18.576522809 +0000 UTC m=+0.143794677 container create 7f7ca15757f50085fb5b03b5a430063c63eddfdda52fa841dd8a418b4345da78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 01 20:34:18 compute-0 systemd[1]: Started libpod-conmon-7f7ca15757f50085fb5b03b5a430063c63eddfdda52fa841dd8a418b4345da78.scope.
Dec 01 20:34:18 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f37f67747fc282128198ff46f06ebdca6fa397cd7bd2178de4cae68f28fe26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f37f67747fc282128198ff46f06ebdca6fa397cd7bd2178de4cae68f28fe26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f37f67747fc282128198ff46f06ebdca6fa397cd7bd2178de4cae68f28fe26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f37f67747fc282128198ff46f06ebdca6fa397cd7bd2178de4cae68f28fe26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f37f67747fc282128198ff46f06ebdca6fa397cd7bd2178de4cae68f28fe26/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:18 compute-0 podman[99102]: 2025-12-01 20:34:18.672905739 +0000 UTC m=+0.240177587 container init 7f7ca15757f50085fb5b03b5a430063c63eddfdda52fa841dd8a418b4345da78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_diffie, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 01 20:34:18 compute-0 podman[99102]: 2025-12-01 20:34:18.685008013 +0000 UTC m=+0.252279841 container start 7f7ca15757f50085fb5b03b5a430063c63eddfdda52fa841dd8a418b4345da78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_diffie, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:34:18 compute-0 podman[99102]: 2025-12-01 20:34:18.689636186 +0000 UTC m=+0.256908014 container attach 7f7ca15757f50085fb5b03b5a430063c63eddfdda52fa841dd8a418b4345da78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_diffie, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:34:18 compute-0 ceph-mon[75880]: 5.b scrub starts
Dec 01 20:34:18 compute-0 ceph-mon[75880]: 5.b scrub ok
Dec 01 20:34:18 compute-0 ceph-mon[75880]: osdmap e61: 3 total, 3 up, 3 in
Dec 01 20:34:18 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 01 20:34:18 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 01 20:34:18 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 01 20:34:19 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 01 20:34:19 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 01 20:34:19 compute-0 infallible_diffie[99118]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:34:19 compute-0 infallible_diffie[99118]: --> All data devices are unavailable
Dec 01 20:34:19 compute-0 systemd[1]: libpod-7f7ca15757f50085fb5b03b5a430063c63eddfdda52fa841dd8a418b4345da78.scope: Deactivated successfully.
Dec 01 20:34:19 compute-0 podman[99102]: 2025-12-01 20:34:19.287942156 +0000 UTC m=+0.855214024 container died 7f7ca15757f50085fb5b03b5a430063c63eddfdda52fa841dd8a418b4345da78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 01 20:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8f37f67747fc282128198ff46f06ebdca6fa397cd7bd2178de4cae68f28fe26-merged.mount: Deactivated successfully.
Dec 01 20:34:19 compute-0 podman[99102]: 2025-12-01 20:34:19.346404974 +0000 UTC m=+0.913676842 container remove 7f7ca15757f50085fb5b03b5a430063c63eddfdda52fa841dd8a418b4345da78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 01 20:34:19 compute-0 systemd[1]: libpod-conmon-7f7ca15757f50085fb5b03b5a430063c63eddfdda52fa841dd8a418b4345da78.scope: Deactivated successfully.
Dec 01 20:34:19 compute-0 sudo[99021]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 01 20:34:19 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 01 20:34:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 01 20:34:19 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 01 20:34:19 compute-0 sudo[99157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:34:19 compute-0 sudo[99157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:34:19 compute-0 sudo[99157]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:19 compute-0 sudo[99182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:34:19 compute-0 sudo[99182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:34:19 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 01 20:34:19 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 01 20:34:19 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 01 20:34:19 compute-0 ceph-mon[75880]: pgmap v135: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 01 20:34:19 compute-0 ceph-mon[75880]: 4.19 scrub starts
Dec 01 20:34:19 compute-0 ceph-mon[75880]: 4.19 scrub ok
Dec 01 20:34:19 compute-0 ceph-mon[75880]: 2.0 scrub starts
Dec 01 20:34:19 compute-0 ceph-mon[75880]: 2.0 scrub ok
Dec 01 20:34:19 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 01 20:34:19 compute-0 ceph-mon[75880]: osdmap e62: 3 total, 3 up, 3 in
Dec 01 20:34:19 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 01 20:34:19 compute-0 podman[99221]: 2025-12-01 20:34:19.962763312 +0000 UTC m=+0.066778265 container create 1812b9f25f31e15e4d37fc27b9e41e2c53b1657eefe9441baeb5f71af72eb352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:34:20 compute-0 systemd[1]: Started libpod-conmon-1812b9f25f31e15e4d37fc27b9e41e2c53b1657eefe9441baeb5f71af72eb352.scope.
Dec 01 20:34:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:20 compute-0 podman[99221]: 2025-12-01 20:34:19.934736206 +0000 UTC m=+0.038751199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:34:20 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:34:20 compute-0 podman[99221]: 2025-12-01 20:34:20.046810801 +0000 UTC m=+0.150825724 container init 1812b9f25f31e15e4d37fc27b9e41e2c53b1657eefe9441baeb5f71af72eb352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dirac, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:34:20 compute-0 podman[99221]: 2025-12-01 20:34:20.057047047 +0000 UTC m=+0.161062000 container start 1812b9f25f31e15e4d37fc27b9e41e2c53b1657eefe9441baeb5f71af72eb352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 01 20:34:20 compute-0 podman[99221]: 2025-12-01 20:34:20.061265608 +0000 UTC m=+0.165280561 container attach 1812b9f25f31e15e4d37fc27b9e41e2c53b1657eefe9441baeb5f71af72eb352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dirac, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 01 20:34:20 compute-0 interesting_dirac[99237]: 167 167
Dec 01 20:34:20 compute-0 systemd[1]: libpod-1812b9f25f31e15e4d37fc27b9e41e2c53b1657eefe9441baeb5f71af72eb352.scope: Deactivated successfully.
Dec 01 20:34:20 compute-0 conmon[99237]: conmon 1812b9f25f31e15e4d37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1812b9f25f31e15e4d37fc27b9e41e2c53b1657eefe9441baeb5f71af72eb352.scope/container/memory.events
Dec 01 20:34:20 compute-0 podman[99221]: 2025-12-01 20:34:20.065175579 +0000 UTC m=+0.169190492 container died 1812b9f25f31e15e4d37fc27b9e41e2c53b1657eefe9441baeb5f71af72eb352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:34:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb996b46a8f2552218b204834d8e294527d0c586339fd6dc68394bebcd3f346f-merged.mount: Deactivated successfully.
Dec 01 20:34:20 compute-0 podman[99221]: 2025-12-01 20:34:20.116784775 +0000 UTC m=+0.220799688 container remove 1812b9f25f31e15e4d37fc27b9e41e2c53b1657eefe9441baeb5f71af72eb352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dirac, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:34:20 compute-0 systemd[1]: libpod-conmon-1812b9f25f31e15e4d37fc27b9e41e2c53b1657eefe9441baeb5f71af72eb352.scope: Deactivated successfully.
Dec 01 20:34:20 compute-0 podman[99263]: 2025-12-01 20:34:20.292960902 +0000 UTC m=+0.046965383 container create d1b1e3673f0de1cee94a8a6edf5b8f22f1359b4fc2f84406c763b07b60fc92dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chatterjee, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:34:20 compute-0 systemd[1]: Started libpod-conmon-d1b1e3673f0de1cee94a8a6edf5b8f22f1359b4fc2f84406c763b07b60fc92dd.scope.
Dec 01 20:34:20 compute-0 podman[99263]: 2025-12-01 20:34:20.271973464 +0000 UTC m=+0.025977985 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:34:20 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2aac8e53d951a369ba7b9757780ed6bc02d3eaa131055599a07954454f65e0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2aac8e53d951a369ba7b9757780ed6bc02d3eaa131055599a07954454f65e0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2aac8e53d951a369ba7b9757780ed6bc02d3eaa131055599a07954454f65e0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2aac8e53d951a369ba7b9757780ed6bc02d3eaa131055599a07954454f65e0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:20 compute-0 podman[99263]: 2025-12-01 20:34:20.389406695 +0000 UTC m=+0.143411206 container init d1b1e3673f0de1cee94a8a6edf5b8f22f1359b4fc2f84406c763b07b60fc92dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 01 20:34:20 compute-0 podman[99263]: 2025-12-01 20:34:20.404134269 +0000 UTC m=+0.158138760 container start d1b1e3673f0de1cee94a8a6edf5b8f22f1359b4fc2f84406c763b07b60fc92dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chatterjee, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:34:20 compute-0 podman[99263]: 2025-12-01 20:34:20.408116833 +0000 UTC m=+0.162121314 container attach d1b1e3673f0de1cee94a8a6edf5b8f22f1359b4fc2f84406c763b07b60fc92dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:34:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v137: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 01 20:34:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 01 20:34:20 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]: {
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:     "0": [
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:         {
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "devices": [
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "/dev/loop3"
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             ],
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_name": "ceph_lv0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_size": "21470642176",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "name": "ceph_lv0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "tags": {
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.cluster_name": "ceph",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.crush_device_class": "",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.encrypted": "0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.objectstore": "bluestore",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.osd_id": "0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.type": "block",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.vdo": "0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.with_tpm": "0"
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             },
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "type": "block",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "vg_name": "ceph_vg0"
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:         }
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:     ],
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:     "1": [
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:         {
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "devices": [
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "/dev/loop4"
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             ],
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_name": "ceph_lv1",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_size": "21470642176",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "name": "ceph_lv1",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "tags": {
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.cluster_name": "ceph",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.crush_device_class": "",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.encrypted": "0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.objectstore": "bluestore",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.osd_id": "1",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.type": "block",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.vdo": "0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.with_tpm": "0"
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             },
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "type": "block",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "vg_name": "ceph_vg1"
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:         }
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:     ],
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:     "2": [
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:         {
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "devices": [
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "/dev/loop5"
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             ],
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_name": "ceph_lv2",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_size": "21470642176",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "name": "ceph_lv2",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "tags": {
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.cluster_name": "ceph",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.crush_device_class": "",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.encrypted": "0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.objectstore": "bluestore",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.osd_id": "2",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.type": "block",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.vdo": "0",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:                 "ceph.with_tpm": "0"
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             },
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "type": "block",
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:             "vg_name": "ceph_vg2"
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:         }
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]:     ]
Dec 01 20:34:20 compute-0 silly_chatterjee[99280]: }
Dec 01 20:34:20 compute-0 systemd[1]: libpod-d1b1e3673f0de1cee94a8a6edf5b8f22f1359b4fc2f84406c763b07b60fc92dd.scope: Deactivated successfully.
Dec 01 20:34:20 compute-0 podman[99263]: 2025-12-01 20:34:20.773300195 +0000 UTC m=+0.527304746 container died d1b1e3673f0de1cee94a8a6edf5b8f22f1359b4fc2f84406c763b07b60fc92dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:34:20 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec 01 20:34:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2aac8e53d951a369ba7b9757780ed6bc02d3eaa131055599a07954454f65e0a-merged.mount: Deactivated successfully.
Dec 01 20:34:20 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec 01 20:34:20 compute-0 podman[99263]: 2025-12-01 20:34:20.83135113 +0000 UTC m=+0.585355601 container remove d1b1e3673f0de1cee94a8a6edf5b8f22f1359b4fc2f84406c763b07b60fc92dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_chatterjee, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 01 20:34:20 compute-0 systemd[1]: libpod-conmon-d1b1e3673f0de1cee94a8a6edf5b8f22f1359b4fc2f84406c763b07b60fc92dd.scope: Deactivated successfully.
Dec 01 20:34:20 compute-0 sudo[99182]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 01 20:34:20 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 01 20:34:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 01 20:34:20 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 01 20:34:20 compute-0 ceph-mon[75880]: 3.10 scrub starts
Dec 01 20:34:20 compute-0 ceph-mon[75880]: 3.10 scrub ok
Dec 01 20:34:20 compute-0 ceph-mon[75880]: 4.3 scrub starts
Dec 01 20:34:20 compute-0 ceph-mon[75880]: 4.3 scrub ok
Dec 01 20:34:20 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 01 20:34:20 compute-0 sudo[99305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:34:20 compute-0 sudo[99305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:34:20 compute-0 sudo[99305]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:21 compute-0 sudo[99330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:34:21 compute-0 sudo[99330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:34:21 compute-0 podman[99369]: 2025-12-01 20:34:21.349854262 +0000 UTC m=+0.057733796 container create eb5fd4a7c4c3427593805e27ff32b3412b4cb5a0ddc27eb734ba8109198a611c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Dec 01 20:34:21 compute-0 systemd[1]: Started libpod-conmon-eb5fd4a7c4c3427593805e27ff32b3412b4cb5a0ddc27eb734ba8109198a611c.scope.
Dec 01 20:34:21 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:34:21 compute-0 podman[99369]: 2025-12-01 20:34:21.320430992 +0000 UTC m=+0.028310626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:34:21 compute-0 podman[99369]: 2025-12-01 20:34:21.425822441 +0000 UTC m=+0.133702095 container init eb5fd4a7c4c3427593805e27ff32b3412b4cb5a0ddc27eb734ba8109198a611c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_heyrovsky, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:34:21 compute-0 podman[99369]: 2025-12-01 20:34:21.431422684 +0000 UTC m=+0.139302208 container start eb5fd4a7c4c3427593805e27ff32b3412b4cb5a0ddc27eb734ba8109198a611c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 01 20:34:21 compute-0 podman[99369]: 2025-12-01 20:34:21.435052136 +0000 UTC m=+0.142931700 container attach eb5fd4a7c4c3427593805e27ff32b3412b4cb5a0ddc27eb734ba8109198a611c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_heyrovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:34:21 compute-0 epic_heyrovsky[99386]: 167 167
Dec 01 20:34:21 compute-0 systemd[1]: libpod-eb5fd4a7c4c3427593805e27ff32b3412b4cb5a0ddc27eb734ba8109198a611c.scope: Deactivated successfully.
Dec 01 20:34:21 compute-0 podman[99369]: 2025-12-01 20:34:21.440204705 +0000 UTC m=+0.148084229 container died eb5fd4a7c4c3427593805e27ff32b3412b4cb5a0ddc27eb734ba8109198a611c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:34:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf52f5b22cc49797ab0180f456de820ad261f121a2115c9f0128f3ae01e853e3-merged.mount: Deactivated successfully.
Dec 01 20:34:21 compute-0 podman[99369]: 2025-12-01 20:34:21.478218681 +0000 UTC m=+0.186098235 container remove eb5fd4a7c4c3427593805e27ff32b3412b4cb5a0ddc27eb734ba8109198a611c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:34:21 compute-0 systemd[1]: libpod-conmon-eb5fd4a7c4c3427593805e27ff32b3412b4cb5a0ddc27eb734ba8109198a611c.scope: Deactivated successfully.
Dec 01 20:34:21 compute-0 podman[99409]: 2025-12-01 20:34:21.694299853 +0000 UTC m=+0.044670662 container create 0a4e871196d8869784325daaef1a70bc2b1caa660d0b698ad05bc1238d14177e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 01 20:34:21 compute-0 systemd[1]: Started libpod-conmon-0a4e871196d8869784325daaef1a70bc2b1caa660d0b698ad05bc1238d14177e.scope.
Dec 01 20:34:21 compute-0 podman[99409]: 2025-12-01 20:34:21.677632928 +0000 UTC m=+0.028003767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:34:21 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcdb642e6a1e5d27022ea555519c15f4f6fcbe35622f92f4b00d4f0071b49498/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcdb642e6a1e5d27022ea555519c15f4f6fcbe35622f92f4b00d4f0071b49498/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcdb642e6a1e5d27022ea555519c15f4f6fcbe35622f92f4b00d4f0071b49498/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcdb642e6a1e5d27022ea555519c15f4f6fcbe35622f92f4b00d4f0071b49498/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:21 compute-0 podman[99409]: 2025-12-01 20:34:21.807879504 +0000 UTC m=+0.158250373 container init 0a4e871196d8869784325daaef1a70bc2b1caa660d0b698ad05bc1238d14177e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:34:21 compute-0 podman[99409]: 2025-12-01 20:34:21.820977019 +0000 UTC m=+0.171347828 container start 0a4e871196d8869784325daaef1a70bc2b1caa660d0b698ad05bc1238d14177e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:34:21 compute-0 podman[99409]: 2025-12-01 20:34:21.825040615 +0000 UTC m=+0.175411424 container attach 0a4e871196d8869784325daaef1a70bc2b1caa660d0b698ad05bc1238d14177e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 01 20:34:21 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 01 20:34:21 compute-0 ceph-mon[75880]: pgmap v137: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 01 20:34:21 compute-0 ceph-mon[75880]: 7.14 scrub starts
Dec 01 20:34:21 compute-0 ceph-mon[75880]: 7.14 scrub ok
Dec 01 20:34:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 01 20:34:21 compute-0 ceph-mon[75880]: osdmap e63: 3 total, 3 up, 3 in
Dec 01 20:34:21 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 01 20:34:22 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 01 20:34:22 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 01 20:34:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Dec 01 20:34:22 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092677116s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 active pruub 137.919769287s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:34:22 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:34:22 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:34:22 compute-0 lvm[99512]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:34:22 compute-0 lvm[99513]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:34:22 compute-0 lvm[99513]: VG ceph_vg1 finished
Dec 01 20:34:22 compute-0 lvm[99512]: VG ceph_vg0 finished
Dec 01 20:34:22 compute-0 lvm[99515]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:34:22 compute-0 lvm[99515]: VG ceph_vg2 finished
Dec 01 20:34:22 compute-0 goofy_brown[99428]: {}
Dec 01 20:34:22 compute-0 systemd[1]: libpod-0a4e871196d8869784325daaef1a70bc2b1caa660d0b698ad05bc1238d14177e.scope: Deactivated successfully.
Dec 01 20:34:22 compute-0 podman[99409]: 2025-12-01 20:34:22.704644613 +0000 UTC m=+1.055015432 container died 0a4e871196d8869784325daaef1a70bc2b1caa660d0b698ad05bc1238d14177e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:34:22 compute-0 systemd[1]: libpod-0a4e871196d8869784325daaef1a70bc2b1caa660d0b698ad05bc1238d14177e.scope: Consumed 1.551s CPU time.
Dec 01 20:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcdb642e6a1e5d27022ea555519c15f4f6fcbe35622f92f4b00d4f0071b49498-merged.mount: Deactivated successfully.
Dec 01 20:34:22 compute-0 podman[99409]: 2025-12-01 20:34:22.749325895 +0000 UTC m=+1.099696734 container remove 0a4e871196d8869784325daaef1a70bc2b1caa660d0b698ad05bc1238d14177e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 01 20:34:22 compute-0 systemd[1]: libpod-conmon-0a4e871196d8869784325daaef1a70bc2b1caa660d0b698ad05bc1238d14177e.scope: Deactivated successfully.
Dec 01 20:34:22 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 01 20:34:22 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 01 20:34:22 compute-0 sudo[99330]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:34:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:34:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:34:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:34:22 compute-0 sudo[99529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:34:22 compute-0 sudo[99529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:34:22 compute-0 sudo[99529]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 01 20:34:22 compute-0 ceph-mon[75880]: 4.0 scrub starts
Dec 01 20:34:22 compute-0 ceph-mon[75880]: 4.0 scrub ok
Dec 01 20:34:22 compute-0 ceph-mon[75880]: 5.0 scrub starts
Dec 01 20:34:22 compute-0 ceph-mon[75880]: 5.0 scrub ok
Dec 01 20:34:22 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:34:22 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:34:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 01 20:34:22 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 01 20:34:22 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:34:23 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 01 20:34:23 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 01 20:34:23 compute-0 ceph-mon[75880]: pgmap v139: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Dec 01 20:34:23 compute-0 ceph-mon[75880]: 7.b scrub starts
Dec 01 20:34:23 compute-0 ceph-mon[75880]: 7.b scrub ok
Dec 01 20:34:23 compute-0 ceph-mon[75880]: osdmap e64: 3 total, 3 up, 3 in
Dec 01 20:34:23 compute-0 ceph-mon[75880]: 4.c scrub starts
Dec 01 20:34:23 compute-0 ceph-mon[75880]: 4.c scrub ok
Dec 01 20:34:24 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 01 20:34:24 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 01 20:34:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 1 peering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Dec 01 20:34:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:25 compute-0 ceph-mon[75880]: 2.1 scrub starts
Dec 01 20:34:25 compute-0 ceph-mon[75880]: 2.1 scrub ok
Dec 01 20:34:25 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 01 20:34:25 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 01 20:34:26 compute-0 ceph-mon[75880]: pgmap v141: 177 pgs: 1 peering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Dec 01 20:34:26 compute-0 ceph-mon[75880]: 4.15 scrub starts
Dec 01 20:34:26 compute-0 ceph-mon[75880]: 4.15 scrub ok
Dec 01 20:34:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 1 peering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Dec 01 20:34:26 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 01 20:34:27 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 01 20:34:27 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 01 20:34:27 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 01 20:34:27 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 01 20:34:27 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 01 20:34:28 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 01 20:34:28 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 01 20:34:28 compute-0 ceph-mon[75880]: pgmap v142: 177 pgs: 1 peering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Dec 01 20:34:28 compute-0 ceph-mon[75880]: 4.16 scrub starts
Dec 01 20:34:28 compute-0 ceph-mon[75880]: 4.16 scrub ok
Dec 01 20:34:28 compute-0 ceph-mon[75880]: 5.6 scrub starts
Dec 01 20:34:28 compute-0 ceph-mon[75880]: 5.6 scrub ok
Dec 01 20:34:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 85 B/s, 0 objects/s recovering
Dec 01 20:34:29 compute-0 ceph-mon[75880]: 3.d scrub starts
Dec 01 20:34:29 compute-0 ceph-mon[75880]: 3.d scrub ok
Dec 01 20:34:29 compute-0 ceph-mon[75880]: 4.17 scrub starts
Dec 01 20:34:29 compute-0 ceph-mon[75880]: 4.17 scrub ok
Dec 01 20:34:29 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 01 20:34:29 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 01 20:34:30 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:30 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 01 20:34:30 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 01 20:34:30 compute-0 ceph-mon[75880]: pgmap v143: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 85 B/s, 0 objects/s recovering
Dec 01 20:34:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 71 B/s, 0 objects/s recovering
Dec 01 20:34:30 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 01 20:34:30 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 01 20:34:31 compute-0 ceph-mon[75880]: 3.b scrub starts
Dec 01 20:34:31 compute-0 ceph-mon[75880]: 3.b scrub ok
Dec 01 20:34:31 compute-0 ceph-mon[75880]: 5.e scrub starts
Dec 01 20:34:31 compute-0 ceph-mon[75880]: 5.e scrub ok
Dec 01 20:34:32 compute-0 ceph-mon[75880]: pgmap v144: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 71 B/s, 0 objects/s recovering
Dec 01 20:34:32 compute-0 ceph-mon[75880]: 3.2 scrub starts
Dec 01 20:34:32 compute-0 ceph-mon[75880]: 3.2 scrub ok
Dec 01 20:34:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:34:32
Dec 01 20:34:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:34:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:34:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes', 'images', 'cephfs.cephfs.meta']
Dec 01 20:34:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:34:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:34:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:34:33 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec 01 20:34:33 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec 01 20:34:34 compute-0 ceph-mon[75880]: pgmap v145: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Dec 01 20:34:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 0 objects/s recovering
Dec 01 20:34:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:35 compute-0 ceph-mon[75880]: 3.0 scrub starts
Dec 01 20:34:35 compute-0 ceph-mon[75880]: 3.0 scrub ok
Dec 01 20:34:35 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 01 20:34:35 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 01 20:34:36 compute-0 ceph-mon[75880]: pgmap v146: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 0 objects/s recovering
Dec 01 20:34:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 0 objects/s recovering
Dec 01 20:34:37 compute-0 ceph-mon[75880]: 7.0 scrub starts
Dec 01 20:34:37 compute-0 ceph-mon[75880]: 7.0 scrub ok
Dec 01 20:34:37 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec 01 20:34:37 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec 01 20:34:38 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 01 20:34:38 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 01 20:34:38 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 01 20:34:38 compute-0 ceph-mon[75880]: pgmap v147: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 0 objects/s recovering
Dec 01 20:34:38 compute-0 ceph-mon[75880]: 5.d scrub starts
Dec 01 20:34:38 compute-0 ceph-mon[75880]: 5.d scrub ok
Dec 01 20:34:38 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 01 20:34:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 0 objects/s recovering
Dec 01 20:34:39 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 01 20:34:39 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 01 20:34:39 compute-0 ceph-mon[75880]: 7.1b scrub starts
Dec 01 20:34:39 compute-0 ceph-mon[75880]: 7.1b scrub ok
Dec 01 20:34:39 compute-0 ceph-mon[75880]: 5.1b scrub starts
Dec 01 20:34:39 compute-0 ceph-mon[75880]: 5.1b scrub ok
Dec 01 20:34:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:40 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 01 20:34:40 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 01 20:34:40 compute-0 ceph-mon[75880]: pgmap v148: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 0 objects/s recovering
Dec 01 20:34:40 compute-0 ceph-mon[75880]: 2.1e scrub starts
Dec 01 20:34:40 compute-0 ceph-mon[75880]: 2.1e scrub ok
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:34:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:41 compute-0 ceph-mon[75880]: 4.1b scrub starts
Dec 01 20:34:41 compute-0 ceph-mon[75880]: 4.1b scrub ok
Dec 01 20:34:42 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 01 20:34:42 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 01 20:34:42 compute-0 ceph-mon[75880]: pgmap v149: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:42 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 01 20:34:42 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 01 20:34:43 compute-0 ceph-mon[75880]: 5.14 scrub starts
Dec 01 20:34:43 compute-0 ceph-mon[75880]: 5.14 scrub ok
Dec 01 20:34:43 compute-0 ceph-mon[75880]: pgmap v150: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:44 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 01 20:34:44 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 01 20:34:44 compute-0 ceph-mon[75880]: 3.4 scrub starts
Dec 01 20:34:44 compute-0 ceph-mon[75880]: 3.4 scrub ok
Dec 01 20:34:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:44 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 01 20:34:44 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 01 20:34:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:45 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 01 20:34:45 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 01 20:34:45 compute-0 ceph-mon[75880]: 5.15 scrub starts
Dec 01 20:34:45 compute-0 ceph-mon[75880]: 5.15 scrub ok
Dec 01 20:34:45 compute-0 ceph-mon[75880]: pgmap v151: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:45 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 01 20:34:45 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 01 20:34:46 compute-0 ceph-mon[75880]: 7.7 scrub starts
Dec 01 20:34:46 compute-0 ceph-mon[75880]: 7.7 scrub ok
Dec 01 20:34:46 compute-0 ceph-mon[75880]: 4.1a scrub starts
Dec 01 20:34:46 compute-0 ceph-mon[75880]: 4.1a scrub ok
Dec 01 20:34:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:47 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 01 20:34:47 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 01 20:34:47 compute-0 ceph-mon[75880]: 7.d scrub starts
Dec 01 20:34:47 compute-0 ceph-mon[75880]: 7.d scrub ok
Dec 01 20:34:47 compute-0 ceph-mon[75880]: pgmap v152: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:48 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 01 20:34:48 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 01 20:34:48 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 01 20:34:48 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 01 20:34:48 compute-0 ceph-mon[75880]: 2.13 scrub starts
Dec 01 20:34:48 compute-0 ceph-mon[75880]: 2.13 scrub ok
Dec 01 20:34:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:49 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 01 20:34:49 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 01 20:34:49 compute-0 ceph-mon[75880]: 2.11 scrub starts
Dec 01 20:34:49 compute-0 ceph-mon[75880]: 2.11 scrub ok
Dec 01 20:34:49 compute-0 ceph-mon[75880]: 4.e scrub starts
Dec 01 20:34:49 compute-0 ceph-mon[75880]: 4.e scrub ok
Dec 01 20:34:49 compute-0 ceph-mon[75880]: pgmap v153: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:49 compute-0 sudo[99603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/sbin/cephadm shell
Dec 01 20:34:49 compute-0 sudo[99603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:34:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:50 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 01 20:34:50 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 01 20:34:50 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 01 20:34:50 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 01 20:34:50 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 01 20:34:50 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 01 20:34:50 compute-0 ceph-mon[75880]: 3.1c scrub starts
Dec 01 20:34:50 compute-0 ceph-mon[75880]: 3.1c scrub ok
Dec 01 20:34:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:50 compute-0 podman[99756]: 2025-12-01 20:34:50.652614405 +0000 UTC m=+0.033287573 container create 40fde4ed5ad0a1c402b44367fe36cd5e10ace635c34de4a67f0bae8b650c02ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:34:50 compute-0 systemd[1]: Started libpod-conmon-40fde4ed5ad0a1c402b44367fe36cd5e10ace635c34de4a67f0bae8b650c02ef.scope.
Dec 01 20:34:50 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61f9ad8cda4a7d21f5e4d8898cab92621b79c12cfb2ecd2210e448c41c039dc1/merged/root supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61f9ad8cda4a7d21f5e4d8898cab92621b79c12cfb2ecd2210e448c41c039dc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61f9ad8cda4a7d21f5e4d8898cab92621b79c12cfb2ecd2210e448c41c039dc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61f9ad8cda4a7d21f5e4d8898cab92621b79c12cfb2ecd2210e448c41c039dc1/merged/etc/ceph/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61f9ad8cda4a7d21f5e4d8898cab92621b79c12cfb2ecd2210e448c41c039dc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:34:50 compute-0 podman[99756]: 2025-12-01 20:34:50.732008542 +0000 UTC m=+0.112681710 container init 40fde4ed5ad0a1c402b44367fe36cd5e10ace635c34de4a67f0bae8b650c02ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:34:50 compute-0 podman[99756]: 2025-12-01 20:34:50.637998421 +0000 UTC m=+0.018671619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:34:50 compute-0 podman[99756]: 2025-12-01 20:34:50.738438035 +0000 UTC m=+0.119111203 container start 40fde4ed5ad0a1c402b44367fe36cd5e10ace635c34de4a67f0bae8b650c02ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_napier, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 01 20:34:50 compute-0 podman[99756]: 2025-12-01 20:34:50.741076209 +0000 UTC m=+0.121749377 container attach 40fde4ed5ad0a1c402b44367fe36cd5e10ace635c34de4a67f0bae8b650c02ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:34:51 compute-0 ceph-mon[75880]: 7.19 scrub starts
Dec 01 20:34:51 compute-0 ceph-mon[75880]: 7.19 scrub ok
Dec 01 20:34:51 compute-0 ceph-mon[75880]: 3.15 scrub starts
Dec 01 20:34:51 compute-0 ceph-mon[75880]: 3.15 scrub ok
Dec 01 20:34:51 compute-0 ceph-mon[75880]: 3.7 scrub starts
Dec 01 20:34:51 compute-0 ceph-mon[75880]: 3.7 scrub ok
Dec 01 20:34:51 compute-0 ceph-mon[75880]: pgmap v154: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:51 compute-0 lucid_napier[99772]: [45B blob data]
Dec 01 20:34:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 01 20:34:52 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/50674950' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 01 20:34:52 compute-0 lucid_napier[99772]: [20B blob data]
Dec 01 20:34:52 compute-0 lucid_napier[99772]:     id:     dcf60a89-bba0-58b0-a1bf-d4bde723201b
Dec 01 20:34:52 compute-0 lucid_napier[99772]:     health: HEALTH_OK
Dec 01 20:34:52 compute-0 lucid_napier[99772]:  
Dec 01 20:34:52 compute-0 lucid_napier[99772]:   services:
Dec 01 20:34:52 compute-0 lucid_napier[99772]:     mon: 1 daemons, quorum compute-0 (age 3m) [leader: compute-0]
Dec 01 20:34:52 compute-0 lucid_napier[99772]:     mgr: compute-0.xhvuzu(active, since 3m)
Dec 01 20:34:52 compute-0 lucid_napier[99772]:     mds: 1/1 daemons up
Dec 01 20:34:52 compute-0 lucid_napier[99772]:     osd: 3 osds: 3 up (since 2m), 3 in (since 2m)
Dec 01 20:34:52 compute-0 lucid_napier[99772]:  
Dec 01 20:34:52 compute-0 lucid_napier[99772]:   data:
Dec 01 20:34:52 compute-0 lucid_napier[99772]:     volumes: 1/1 healthy
Dec 01 20:34:52 compute-0 lucid_napier[99772]:     pools:   7 pools, 177 pgs
Dec 01 20:34:52 compute-0 lucid_napier[99772]:     objects: 24 objects, 451 KiB
Dec 01 20:34:52 compute-0 lucid_napier[99772]:     usage:   81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:52 compute-0 lucid_napier[99772]:     pgs:     177 active+clean
Dec 01 20:34:52 compute-0 lucid_napier[99772]:  
Dec 01 20:34:52 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/50674950' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 01 20:34:53 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 01 20:34:53 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 01 20:34:53 compute-0 ceph-mon[75880]: pgmap v155: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:53 compute-0 ceph-mon[75880]: 4.2 scrub starts
Dec 01 20:34:53 compute-0 ceph-mon[75880]: 4.2 scrub ok
Dec 01 20:34:54 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 01 20:34:54 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 01 20:34:54 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 01 20:34:54 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 01 20:34:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:54 compute-0 ceph-mon[75880]: 4.18 scrub starts
Dec 01 20:34:54 compute-0 ceph-mon[75880]: 4.18 scrub ok
Dec 01 20:34:54 compute-0 lucid_napier[99772]: [48B blob data]
Dec 01 20:34:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:34:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Dec 01 20:34:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1522119708' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 01 20:34:55 compute-0 lucid_napier[99772]: [11B blob data]
Dec 01 20:34:55 compute-0 lucid_napier[99772]:     "mon": {
Dec 01 20:34:55 compute-0 lucid_napier[99772]:         "ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)": 1
Dec 01 20:34:55 compute-0 lucid_napier[99772]:     },
Dec 01 20:34:55 compute-0 lucid_napier[99772]:     "mgr": {
Dec 01 20:34:55 compute-0 lucid_napier[99772]:         "ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)": 1
Dec 01 20:34:55 compute-0 lucid_napier[99772]:     },
Dec 01 20:34:55 compute-0 lucid_napier[99772]:     "osd": {
Dec 01 20:34:55 compute-0 lucid_napier[99772]:         "ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)": 3
Dec 01 20:34:55 compute-0 lucid_napier[99772]:     },
Dec 01 20:34:55 compute-0 lucid_napier[99772]:     "mds": {
Dec 01 20:34:55 compute-0 lucid_napier[99772]:         "ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)": 1
Dec 01 20:34:55 compute-0 lucid_napier[99772]:     },
Dec 01 20:34:55 compute-0 lucid_napier[99772]:     "overall": {
Dec 01 20:34:55 compute-0 lucid_napier[99772]:         "ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)": 6
Dec 01 20:34:55 compute-0 lucid_napier[99772]:     }
Dec 01 20:34:55 compute-0 lucid_napier[99772]: }
Dec 01 20:34:55 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 01 20:34:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 01 20:34:55 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 01 20:34:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 01 20:34:55 compute-0 ceph-mon[75880]: 2.16 scrub starts
Dec 01 20:34:55 compute-0 ceph-mon[75880]: 2.16 scrub ok
Dec 01 20:34:55 compute-0 ceph-mon[75880]: pgmap v156: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:55 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1522119708' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 01 20:34:55 compute-0 ceph-mon[75880]: 4.4 scrub starts
Dec 01 20:34:55 compute-0 ceph-mon[75880]: 7.2 scrub starts
Dec 01 20:34:55 compute-0 ceph-mon[75880]: 4.4 scrub ok
Dec 01 20:34:55 compute-0 ceph-mon[75880]: 7.2 scrub ok
Dec 01 20:34:56 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 01 20:34:56 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 01 20:34:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:56 compute-0 ceph-mon[75880]: 4.7 scrub starts
Dec 01 20:34:56 compute-0 ceph-mon[75880]: 4.7 scrub ok
Dec 01 20:34:56 compute-0 lucid_napier[99772]: [43B blob data]
Dec 01 20:34:56 compute-0 lucid_napier[99772]: exit
Dec 01 20:34:56 compute-0 systemd[1]: libpod-40fde4ed5ad0a1c402b44367fe36cd5e10ace635c34de4a67f0bae8b650c02ef.scope: Deactivated successfully.
Dec 01 20:34:56 compute-0 podman[99756]: 2025-12-01 20:34:56.896772996 +0000 UTC m=+6.277446204 container died 40fde4ed5ad0a1c402b44367fe36cd5e10ace635c34de4a67f0bae8b650c02ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_napier, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:34:56 compute-0 systemd[1]: libpod-40fde4ed5ad0a1c402b44367fe36cd5e10ace635c34de4a67f0bae8b650c02ef.scope: Consumed 1.131s CPU time.
Dec 01 20:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-61f9ad8cda4a7d21f5e4d8898cab92621b79c12cfb2ecd2210e448c41c039dc1-merged.mount: Deactivated successfully.
Dec 01 20:34:56 compute-0 podman[99756]: 2025-12-01 20:34:56.954962465 +0000 UTC m=+6.335635633 container remove 40fde4ed5ad0a1c402b44367fe36cd5e10ace635c34de4a67f0bae8b650c02ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec 01 20:34:56 compute-0 systemd[1]: libpod-conmon-40fde4ed5ad0a1c402b44367fe36cd5e10ace635c34de4a67f0bae8b650c02ef.scope: Deactivated successfully.
Dec 01 20:34:56 compute-0 sudo[99603]: pam_unix(sudo:session): session closed for user root
Dec 01 20:34:57 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 01 20:34:57 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 01 20:34:57 compute-0 ceph-mon[75880]: pgmap v157: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:57 compute-0 ceph-mon[75880]: 7.1 scrub starts
Dec 01 20:34:57 compute-0 ceph-mon[75880]: 7.1 scrub ok
Dec 01 20:34:58 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 01 20:34:58 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 01 20:34:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:58 compute-0 ceph-mon[75880]: 4.d scrub starts
Dec 01 20:34:58 compute-0 ceph-mon[75880]: 4.d scrub ok
Dec 01 20:34:59 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 01 20:34:59 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 01 20:34:59 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 01 20:34:59 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 01 20:34:59 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 01 20:34:59 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 01 20:34:59 compute-0 ceph-mon[75880]: pgmap v158: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:34:59 compute-0 ceph-mon[75880]: 4.5 scrub starts
Dec 01 20:34:59 compute-0 ceph-mon[75880]: 4.5 scrub ok
Dec 01 20:34:59 compute-0 ceph-mon[75880]: 4.1 scrub starts
Dec 01 20:34:59 compute-0 ceph-mon[75880]: 4.1 scrub ok
Dec 01 20:35:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:00 compute-0 ceph-mon[75880]: 3.12 scrub starts
Dec 01 20:35:00 compute-0 ceph-mon[75880]: 3.12 scrub ok
Dec 01 20:35:01 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 01 20:35:01 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 01 20:35:01 compute-0 ceph-mon[75880]: pgmap v159: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:01 compute-0 ceph-mon[75880]: 4.f scrub starts
Dec 01 20:35:01 compute-0 ceph-mon[75880]: 4.f scrub ok
Dec 01 20:35:02 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 01 20:35:02 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 01 20:35:02 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 01 20:35:02 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 01 20:35:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:02 compute-0 ceph-mon[75880]: 4.14 scrub starts
Dec 01 20:35:02 compute-0 ceph-mon[75880]: 4.14 scrub ok
Dec 01 20:35:02 compute-0 ceph-mon[75880]: 4.a scrub starts
Dec 01 20:35:02 compute-0 ceph-mon[75880]: 4.a scrub ok
Dec 01 20:35:03 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 01 20:35:03 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 01 20:35:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:35:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:35:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:35:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:35:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:35:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:35:03 compute-0 ceph-mon[75880]: pgmap v160: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:03 compute-0 ceph-mon[75880]: 7.5 scrub starts
Dec 01 20:35:03 compute-0 ceph-mon[75880]: 7.5 scrub ok
Dec 01 20:35:04 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 01 20:35:04 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 01 20:35:04 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 01 20:35:04 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 01 20:35:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:04 compute-0 ceph-mon[75880]: 2.1b scrub starts
Dec 01 20:35:04 compute-0 ceph-mon[75880]: 2.1b scrub ok
Dec 01 20:35:04 compute-0 ceph-mon[75880]: 3.5 scrub starts
Dec 01 20:35:04 compute-0 ceph-mon[75880]: 3.5 scrub ok
Dec 01 20:35:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 01 20:35:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 01 20:35:05 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 01 20:35:05 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 01 20:35:05 compute-0 ceph-mon[75880]: pgmap v161: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:05 compute-0 ceph-mon[75880]: 7.8 scrub starts
Dec 01 20:35:05 compute-0 ceph-mon[75880]: 7.8 scrub ok
Dec 01 20:35:06 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 01 20:35:06 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 01 20:35:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:06 compute-0 ceph-mon[75880]: 3.17 scrub starts
Dec 01 20:35:06 compute-0 ceph-mon[75880]: 3.17 scrub ok
Dec 01 20:35:06 compute-0 ceph-mon[75880]: 4.9 scrub starts
Dec 01 20:35:06 compute-0 ceph-mon[75880]: 4.9 scrub ok
Dec 01 20:35:07 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 01 20:35:07 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 01 20:35:07 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 01 20:35:07 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 01 20:35:07 compute-0 ceph-mon[75880]: pgmap v162: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:07 compute-0 ceph-mon[75880]: 7.a scrub starts
Dec 01 20:35:07 compute-0 ceph-mon[75880]: 7.a scrub ok
Dec 01 20:35:08 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 01 20:35:08 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 01 20:35:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:08 compute-0 ceph-mon[75880]: 7.13 scrub starts
Dec 01 20:35:08 compute-0 ceph-mon[75880]: 7.13 scrub ok
Dec 01 20:35:09 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 01 20:35:09 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 01 20:35:09 compute-0 ceph-mon[75880]: 7.f scrub starts
Dec 01 20:35:09 compute-0 ceph-mon[75880]: 7.f scrub ok
Dec 01 20:35:09 compute-0 ceph-mon[75880]: pgmap v163: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:10 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 01 20:35:10 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 01 20:35:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:10 compute-0 ceph-mon[75880]: 7.3 scrub starts
Dec 01 20:35:10 compute-0 ceph-mon[75880]: 7.3 scrub ok
Dec 01 20:35:10 compute-0 ceph-mon[75880]: 5.3 scrub starts
Dec 01 20:35:10 compute-0 ceph-mon[75880]: 5.3 scrub ok
Dec 01 20:35:11 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 01 20:35:11 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 01 20:35:11 compute-0 ceph-mon[75880]: pgmap v164: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:11 compute-0 ceph-mon[75880]: 3.e scrub starts
Dec 01 20:35:11 compute-0 ceph-mon[75880]: 3.e scrub ok
Dec 01 20:35:12 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 01 20:35:12 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 01 20:35:12 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 01 20:35:12 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 01 20:35:12 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 01 20:35:12 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 01 20:35:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:12 compute-0 ceph-mon[75880]: 3.6 scrub starts
Dec 01 20:35:12 compute-0 ceph-mon[75880]: 3.6 scrub ok
Dec 01 20:35:12 compute-0 ceph-mon[75880]: 5.11 scrub starts
Dec 01 20:35:12 compute-0 ceph-mon[75880]: 5.11 scrub ok
Dec 01 20:35:12 compute-0 ceph-mon[75880]: 7.11 scrub starts
Dec 01 20:35:12 compute-0 ceph-mon[75880]: 7.11 scrub ok
Dec 01 20:35:13 compute-0 ceph-mon[75880]: pgmap v165: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:15 compute-0 ceph-mon[75880]: pgmap v166: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:16 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 01 20:35:16 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 01 20:35:16 compute-0 sudo[99968]:     zuul : TTY=pts/0 ; PWD=/etc/yum.repos.d ; USER=root ; COMMAND=/bin/vi delorean-antelope-testing.repo
Dec 01 20:35:16 compute-0 sudo[99968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:16 compute-0 ceph-mon[75880]: 4.10 scrub starts
Dec 01 20:35:16 compute-0 ceph-mon[75880]: 4.10 scrub ok
Dec 01 20:35:17 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 01 20:35:17 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 01 20:35:17 compute-0 ceph-mon[75880]: pgmap v167: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:17 compute-0 ceph-mon[75880]: 5.2 scrub starts
Dec 01 20:35:17 compute-0 ceph-mon[75880]: 5.2 scrub ok
Dec 01 20:35:18 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 01 20:35:18 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 01 20:35:18 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 01 20:35:18 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 01 20:35:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v168: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:18 compute-0 ceph-mon[75880]: 5.13 scrub starts
Dec 01 20:35:18 compute-0 ceph-mon[75880]: 5.13 scrub ok
Dec 01 20:35:18 compute-0 ceph-mon[75880]: 7.15 scrub starts
Dec 01 20:35:18 compute-0 ceph-mon[75880]: 7.15 scrub ok
Dec 01 20:35:19 compute-0 ceph-mon[75880]: pgmap v168: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:20 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 01 20:35:20 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 01 20:35:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:20 compute-0 ceph-mon[75880]: 2.17 scrub starts
Dec 01 20:35:20 compute-0 ceph-mon[75880]: 2.17 scrub ok
Dec 01 20:35:21 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 01 20:35:21 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 01 20:35:21 compute-0 ceph-mon[75880]: pgmap v169: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:21 compute-0 ceph-mon[75880]: 5.5 scrub starts
Dec 01 20:35:21 compute-0 ceph-mon[75880]: 5.5 scrub ok
Dec 01 20:35:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:22 compute-0 sudo[99968]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:22 compute-0 sudo[99971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:35:22 compute-0 sudo[99971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:35:22 compute-0 sudo[99971]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:23 compute-0 sudo[99996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:35:23 compute-0 sudo[99996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:35:23 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 01 20:35:23 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 01 20:35:23 compute-0 sudo[99996]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:35:23 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:35:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:35:23 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:35:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:35:23 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:35:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:35:23 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:35:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:35:23 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:35:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:35:23 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:35:23 compute-0 sudo[100052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:35:23 compute-0 sudo[100052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:35:23 compute-0 sudo[100052]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:23 compute-0 sudo[100077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:35:23 compute-0 sudo[100077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:35:23 compute-0 ceph-mon[75880]: pgmap v170: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:23 compute-0 ceph-mon[75880]: 4.11 scrub starts
Dec 01 20:35:23 compute-0 ceph-mon[75880]: 4.11 scrub ok
Dec 01 20:35:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:35:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:35:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:35:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:35:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:35:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:35:24 compute-0 podman[100115]: 2025-12-01 20:35:24.005226315 +0000 UTC m=+0.038075159 container create c83f8efddd9821c24e950d24de4382151519e75a750f34f072f2852eb078b832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:35:24 compute-0 systemd[1]: Started libpod-conmon-c83f8efddd9821c24e950d24de4382151519e75a750f34f072f2852eb078b832.scope.
Dec 01 20:35:24 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:35:24 compute-0 podman[100115]: 2025-12-01 20:35:24.071278226 +0000 UTC m=+0.104127070 container init c83f8efddd9821c24e950d24de4382151519e75a750f34f072f2852eb078b832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 01 20:35:24 compute-0 podman[100115]: 2025-12-01 20:35:24.078706326 +0000 UTC m=+0.111555180 container start c83f8efddd9821c24e950d24de4382151519e75a750f34f072f2852eb078b832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:35:24 compute-0 quizzical_hawking[100132]: 167 167
Dec 01 20:35:24 compute-0 systemd[1]: libpod-c83f8efddd9821c24e950d24de4382151519e75a750f34f072f2852eb078b832.scope: Deactivated successfully.
Dec 01 20:35:24 compute-0 podman[100115]: 2025-12-01 20:35:24.083699937 +0000 UTC m=+0.116548781 container attach c83f8efddd9821c24e950d24de4382151519e75a750f34f072f2852eb078b832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 01 20:35:24 compute-0 podman[100115]: 2025-12-01 20:35:24.084007746 +0000 UTC m=+0.116856590 container died c83f8efddd9821c24e950d24de4382151519e75a750f34f072f2852eb078b832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 01 20:35:24 compute-0 podman[100115]: 2025-12-01 20:35:23.990282102 +0000 UTC m=+0.023130976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a1df1691457673b0b732b83bd6365e1eece36ebe06593cbfbf3d6dc0d23001b-merged.mount: Deactivated successfully.
Dec 01 20:35:24 compute-0 podman[100115]: 2025-12-01 20:35:24.12337751 +0000 UTC m=+0.156226354 container remove c83f8efddd9821c24e950d24de4382151519e75a750f34f072f2852eb078b832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hawking, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:35:24 compute-0 systemd[1]: libpod-conmon-c83f8efddd9821c24e950d24de4382151519e75a750f34f072f2852eb078b832.scope: Deactivated successfully.
Dec 01 20:35:24 compute-0 podman[100157]: 2025-12-01 20:35:24.273693966 +0000 UTC m=+0.051218431 container create 350fdb1c3a72d0d159a4b074b2d0187807fce542b3900e9b838d36b1da0d332e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:35:24 compute-0 systemd[1]: Started libpod-conmon-350fdb1c3a72d0d159a4b074b2d0187807fce542b3900e9b838d36b1da0d332e.scope.
Dec 01 20:35:24 compute-0 podman[100157]: 2025-12-01 20:35:24.246250709 +0000 UTC m=+0.023775214 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:35:24 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f207a4deb917bf64cee7d5fd0253565f3fe2b1679faf0cefe2d0abf36a8c57b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f207a4deb917bf64cee7d5fd0253565f3fe2b1679faf0cefe2d0abf36a8c57b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f207a4deb917bf64cee7d5fd0253565f3fe2b1679faf0cefe2d0abf36a8c57b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f207a4deb917bf64cee7d5fd0253565f3fe2b1679faf0cefe2d0abf36a8c57b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f207a4deb917bf64cee7d5fd0253565f3fe2b1679faf0cefe2d0abf36a8c57b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:24 compute-0 podman[100157]: 2025-12-01 20:35:24.362909431 +0000 UTC m=+0.140433936 container init 350fdb1c3a72d0d159a4b074b2d0187807fce542b3900e9b838d36b1da0d332e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:35:24 compute-0 podman[100157]: 2025-12-01 20:35:24.376433705 +0000 UTC m=+0.153958170 container start 350fdb1c3a72d0d159a4b074b2d0187807fce542b3900e9b838d36b1da0d332e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 01 20:35:24 compute-0 podman[100157]: 2025-12-01 20:35:24.379322347 +0000 UTC m=+0.156846862 container attach 350fdb1c3a72d0d159a4b074b2d0187807fce542b3900e9b838d36b1da0d332e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:35:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:24 compute-0 great_colden[100174]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:35:24 compute-0 great_colden[100174]: --> All data devices are unavailable
Dec 01 20:35:24 compute-0 systemd[1]: libpod-350fdb1c3a72d0d159a4b074b2d0187807fce542b3900e9b838d36b1da0d332e.scope: Deactivated successfully.
Dec 01 20:35:24 compute-0 podman[100157]: 2025-12-01 20:35:24.913479469 +0000 UTC m=+0.691003934 container died 350fdb1c3a72d0d159a4b074b2d0187807fce542b3900e9b838d36b1da0d332e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f207a4deb917bf64cee7d5fd0253565f3fe2b1679faf0cefe2d0abf36a8c57b5-merged.mount: Deactivated successfully.
Dec 01 20:35:24 compute-0 podman[100157]: 2025-12-01 20:35:24.959287136 +0000 UTC m=+0.736811601 container remove 350fdb1c3a72d0d159a4b074b2d0187807fce542b3900e9b838d36b1da0d332e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 01 20:35:24 compute-0 systemd[1]: libpod-conmon-350fdb1c3a72d0d159a4b074b2d0187807fce542b3900e9b838d36b1da0d332e.scope: Deactivated successfully.
Dec 01 20:35:24 compute-0 sudo[100077]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:25 compute-0 sudo[100207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:35:25 compute-0 sudo[100207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:35:25 compute-0 sudo[100207]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:25 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 01 20:35:25 compute-0 sudo[100232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:35:25 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 01 20:35:25 compute-0 sudo[100232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:35:25 compute-0 podman[100269]: 2025-12-01 20:35:25.406773585 +0000 UTC m=+0.033877670 container create df6e2f69564fb2e5c49049e673d315b3a0b2c89802167dfe500aeafc718f07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaum, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:35:25 compute-0 systemd[1]: Started libpod-conmon-df6e2f69564fb2e5c49049e673d315b3a0b2c89802167dfe500aeafc718f07d5.scope.
Dec 01 20:35:25 compute-0 podman[100269]: 2025-12-01 20:35:25.391601885 +0000 UTC m=+0.018705970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:35:25 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:35:25 compute-0 podman[100269]: 2025-12-01 20:35:25.501009043 +0000 UTC m=+0.128113118 container init df6e2f69564fb2e5c49049e673d315b3a0b2c89802167dfe500aeafc718f07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaum, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:35:25 compute-0 podman[100269]: 2025-12-01 20:35:25.507463096 +0000 UTC m=+0.134567161 container start df6e2f69564fb2e5c49049e673d315b3a0b2c89802167dfe500aeafc718f07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 01 20:35:25 compute-0 wonderful_chaum[100286]: 167 167
Dec 01 20:35:25 compute-0 systemd[1]: libpod-df6e2f69564fb2e5c49049e673d315b3a0b2c89802167dfe500aeafc718f07d5.scope: Deactivated successfully.
Dec 01 20:35:25 compute-0 podman[100269]: 2025-12-01 20:35:25.512067836 +0000 UTC m=+0.139171921 container attach df6e2f69564fb2e5c49049e673d315b3a0b2c89802167dfe500aeafc718f07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 01 20:35:25 compute-0 podman[100269]: 2025-12-01 20:35:25.512398166 +0000 UTC m=+0.139502231 container died df6e2f69564fb2e5c49049e673d315b3a0b2c89802167dfe500aeafc718f07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaum, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 01 20:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bd2bd6037a610ce859163944774962e4ab23594f2333b92530f4d4656de4328-merged.mount: Deactivated successfully.
Dec 01 20:35:25 compute-0 podman[100269]: 2025-12-01 20:35:25.54293648 +0000 UTC m=+0.170040545 container remove df6e2f69564fb2e5c49049e673d315b3a0b2c89802167dfe500aeafc718f07d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:35:25 compute-0 systemd[1]: libpod-conmon-df6e2f69564fb2e5c49049e673d315b3a0b2c89802167dfe500aeafc718f07d5.scope: Deactivated successfully.
Dec 01 20:35:25 compute-0 podman[100310]: 2025-12-01 20:35:25.676384238 +0000 UTC m=+0.037038809 container create 7e788701087bbedaebeee247c48f8d70be64cad50101593d4b0777f420d0a664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_gates, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:35:25 compute-0 systemd[1]: Started libpod-conmon-7e788701087bbedaebeee247c48f8d70be64cad50101593d4b0777f420d0a664.scope.
Dec 01 20:35:25 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:35:25 compute-0 podman[100310]: 2025-12-01 20:35:25.656942297 +0000 UTC m=+0.017596888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c39930696e81d892e8862a13ca43d2dca1438152918c99cce45bfb8ec8b67a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c39930696e81d892e8862a13ca43d2dca1438152918c99cce45bfb8ec8b67a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c39930696e81d892e8862a13ca43d2dca1438152918c99cce45bfb8ec8b67a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c39930696e81d892e8862a13ca43d2dca1438152918c99cce45bfb8ec8b67a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:25 compute-0 podman[100310]: 2025-12-01 20:35:25.773534789 +0000 UTC m=+0.134189370 container init 7e788701087bbedaebeee247c48f8d70be64cad50101593d4b0777f420d0a664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Dec 01 20:35:25 compute-0 podman[100310]: 2025-12-01 20:35:25.780511146 +0000 UTC m=+0.141165717 container start 7e788701087bbedaebeee247c48f8d70be64cad50101593d4b0777f420d0a664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_gates, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 01 20:35:25 compute-0 podman[100310]: 2025-12-01 20:35:25.783671226 +0000 UTC m=+0.144325787 container attach 7e788701087bbedaebeee247c48f8d70be64cad50101593d4b0777f420d0a664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_gates, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:35:25 compute-0 ceph-mon[75880]: pgmap v171: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:25 compute-0 ceph-mon[75880]: 2.f scrub starts
Dec 01 20:35:25 compute-0 ceph-mon[75880]: 2.f scrub ok
Dec 01 20:35:26 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 01 20:35:26 compute-0 quirky_gates[100327]: {
Dec 01 20:35:26 compute-0 quirky_gates[100327]:     "0": [
Dec 01 20:35:26 compute-0 quirky_gates[100327]:         {
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "devices": [
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "/dev/loop3"
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             ],
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_name": "ceph_lv0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_size": "21470642176",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "name": "ceph_lv0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "tags": {
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.cluster_name": "ceph",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.crush_device_class": "",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.encrypted": "0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.objectstore": "bluestore",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.osd_id": "0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.type": "block",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.vdo": "0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.with_tpm": "0"
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             },
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "type": "block",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "vg_name": "ceph_vg0"
Dec 01 20:35:26 compute-0 quirky_gates[100327]:         }
Dec 01 20:35:26 compute-0 quirky_gates[100327]:     ],
Dec 01 20:35:26 compute-0 quirky_gates[100327]:     "1": [
Dec 01 20:35:26 compute-0 quirky_gates[100327]:         {
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "devices": [
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "/dev/loop4"
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             ],
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_name": "ceph_lv1",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_size": "21470642176",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "name": "ceph_lv1",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "tags": {
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.cluster_name": "ceph",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.crush_device_class": "",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.encrypted": "0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.objectstore": "bluestore",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.osd_id": "1",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.type": "block",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.vdo": "0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.with_tpm": "0"
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             },
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "type": "block",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "vg_name": "ceph_vg1"
Dec 01 20:35:26 compute-0 quirky_gates[100327]:         }
Dec 01 20:35:26 compute-0 quirky_gates[100327]:     ],
Dec 01 20:35:26 compute-0 quirky_gates[100327]:     "2": [
Dec 01 20:35:26 compute-0 quirky_gates[100327]:         {
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "devices": [
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "/dev/loop5"
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             ],
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_name": "ceph_lv2",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_size": "21470642176",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "name": "ceph_lv2",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "tags": {
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.cluster_name": "ceph",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.crush_device_class": "",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.encrypted": "0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.objectstore": "bluestore",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.osd_id": "2",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.type": "block",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.vdo": "0",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:                 "ceph.with_tpm": "0"
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             },
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "type": "block",
Dec 01 20:35:26 compute-0 quirky_gates[100327]:             "vg_name": "ceph_vg2"
Dec 01 20:35:26 compute-0 quirky_gates[100327]:         }
Dec 01 20:35:26 compute-0 quirky_gates[100327]:     ]
Dec 01 20:35:26 compute-0 quirky_gates[100327]: }
Dec 01 20:35:26 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 01 20:35:26 compute-0 systemd[1]: libpod-7e788701087bbedaebeee247c48f8d70be64cad50101593d4b0777f420d0a664.scope: Deactivated successfully.
Dec 01 20:35:26 compute-0 podman[100310]: 2025-12-01 20:35:26.069033405 +0000 UTC m=+0.429687986 container died 7e788701087bbedaebeee247c48f8d70be64cad50101593d4b0777f420d0a664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_gates, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c39930696e81d892e8862a13ca43d2dca1438152918c99cce45bfb8ec8b67a1-merged.mount: Deactivated successfully.
Dec 01 20:35:26 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 01 20:35:26 compute-0 podman[100310]: 2025-12-01 20:35:26.106930397 +0000 UTC m=+0.467584958 container remove 7e788701087bbedaebeee247c48f8d70be64cad50101593d4b0777f420d0a664 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_gates, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:35:26 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 01 20:35:26 compute-0 systemd[1]: libpod-conmon-7e788701087bbedaebeee247c48f8d70be64cad50101593d4b0777f420d0a664.scope: Deactivated successfully.
Dec 01 20:35:26 compute-0 sudo[100232]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:26 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 01 20:35:26 compute-0 sudo[100349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:35:26 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 01 20:35:26 compute-0 sudo[100349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:35:26 compute-0 sudo[100349]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:26 compute-0 sudo[100374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:35:26 compute-0 sudo[100374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:35:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:26 compute-0 podman[100411]: 2025-12-01 20:35:26.529168582 +0000 UTC m=+0.044796439 container create f33ead080b4063acf3a52fee305617a6e78fceaad35ec588c755f07f385a4244 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 01 20:35:26 compute-0 systemd[1]: Started libpod-conmon-f33ead080b4063acf3a52fee305617a6e78fceaad35ec588c755f07f385a4244.scope.
Dec 01 20:35:26 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:35:26 compute-0 podman[100411]: 2025-12-01 20:35:26.508368233 +0000 UTC m=+0.023996080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:35:26 compute-0 podman[100411]: 2025-12-01 20:35:26.608807256 +0000 UTC m=+0.124435173 container init f33ead080b4063acf3a52fee305617a6e78fceaad35ec588c755f07f385a4244 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 01 20:35:26 compute-0 podman[100411]: 2025-12-01 20:35:26.618332436 +0000 UTC m=+0.133960263 container start f33ead080b4063acf3a52fee305617a6e78fceaad35ec588c755f07f385a4244 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:35:26 compute-0 podman[100411]: 2025-12-01 20:35:26.621720112 +0000 UTC m=+0.137348019 container attach f33ead080b4063acf3a52fee305617a6e78fceaad35ec588c755f07f385a4244 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 01 20:35:26 compute-0 upbeat_diffie[100428]: 167 167
Dec 01 20:35:26 compute-0 systemd[1]: libpod-f33ead080b4063acf3a52fee305617a6e78fceaad35ec588c755f07f385a4244.scope: Deactivated successfully.
Dec 01 20:35:26 compute-0 podman[100411]: 2025-12-01 20:35:26.623106661 +0000 UTC m=+0.138734488 container died f33ead080b4063acf3a52fee305617a6e78fceaad35ec588c755f07f385a4244 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 01 20:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf6da7c9cbb15bd8f96acb874a1a87733bea04422fec98f782b1b8435692d8ac-merged.mount: Deactivated successfully.
Dec 01 20:35:26 compute-0 podman[100411]: 2025-12-01 20:35:26.665488391 +0000 UTC m=+0.181116238 container remove f33ead080b4063acf3a52fee305617a6e78fceaad35ec588c755f07f385a4244 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 01 20:35:26 compute-0 systemd[1]: libpod-conmon-f33ead080b4063acf3a52fee305617a6e78fceaad35ec588c755f07f385a4244.scope: Deactivated successfully.
Dec 01 20:35:26 compute-0 podman[100451]: 2025-12-01 20:35:26.862696535 +0000 UTC m=+0.063297584 container create ed8ae340c12d1989dbc8ef24119bd7ec52a21c2547f9a0888f1566260d8899ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 01 20:35:26 compute-0 systemd[1]: Started libpod-conmon-ed8ae340c12d1989dbc8ef24119bd7ec52a21c2547f9a0888f1566260d8899ea.scope.
Dec 01 20:35:26 compute-0 ceph-mon[75880]: 4.12 scrub starts
Dec 01 20:35:26 compute-0 ceph-mon[75880]: 4.12 scrub ok
Dec 01 20:35:26 compute-0 ceph-mon[75880]: 7.6 scrub starts
Dec 01 20:35:26 compute-0 ceph-mon[75880]: 7.6 scrub ok
Dec 01 20:35:26 compute-0 ceph-mon[75880]: 4.13 scrub starts
Dec 01 20:35:26 compute-0 ceph-mon[75880]: 4.13 scrub ok
Dec 01 20:35:26 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a44c4df5e14d25e18e65d94c7db37dec4411826fce327c13a3475a645ab4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a44c4df5e14d25e18e65d94c7db37dec4411826fce327c13a3475a645ab4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a44c4df5e14d25e18e65d94c7db37dec4411826fce327c13a3475a645ab4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0a44c4df5e14d25e18e65d94c7db37dec4411826fce327c13a3475a645ab4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:35:26 compute-0 podman[100451]: 2025-12-01 20:35:26.839420275 +0000 UTC m=+0.040021374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:35:26 compute-0 podman[100451]: 2025-12-01 20:35:26.947162215 +0000 UTC m=+0.147763264 container init ed8ae340c12d1989dbc8ef24119bd7ec52a21c2547f9a0888f1566260d8899ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_shockley, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:35:26 compute-0 podman[100451]: 2025-12-01 20:35:26.954998308 +0000 UTC m=+0.155599317 container start ed8ae340c12d1989dbc8ef24119bd7ec52a21c2547f9a0888f1566260d8899ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_shockley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:35:26 compute-0 podman[100451]: 2025-12-01 20:35:26.958343082 +0000 UTC m=+0.158944111 container attach ed8ae340c12d1989dbc8ef24119bd7ec52a21c2547f9a0888f1566260d8899ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:35:27 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 01 20:35:27 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 01 20:35:27 compute-0 lvm[100545]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:35:27 compute-0 lvm[100545]: VG ceph_vg0 finished
Dec 01 20:35:27 compute-0 lvm[100546]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:35:27 compute-0 lvm[100546]: VG ceph_vg1 finished
Dec 01 20:35:27 compute-0 lvm[100548]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:35:27 compute-0 lvm[100548]: VG ceph_vg2 finished
Dec 01 20:35:27 compute-0 competent_shockley[100467]: {}
Dec 01 20:35:27 compute-0 systemd[1]: libpod-ed8ae340c12d1989dbc8ef24119bd7ec52a21c2547f9a0888f1566260d8899ea.scope: Deactivated successfully.
Dec 01 20:35:27 compute-0 systemd[1]: libpod-ed8ae340c12d1989dbc8ef24119bd7ec52a21c2547f9a0888f1566260d8899ea.scope: Consumed 1.258s CPU time.
Dec 01 20:35:27 compute-0 podman[100451]: 2025-12-01 20:35:27.749148501 +0000 UTC m=+0.949749540 container died ed8ae340c12d1989dbc8ef24119bd7ec52a21c2547f9a0888f1566260d8899ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_shockley, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 01 20:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c0a44c4df5e14d25e18e65d94c7db37dec4411826fce327c13a3475a645ab4d-merged.mount: Deactivated successfully.
Dec 01 20:35:27 compute-0 podman[100451]: 2025-12-01 20:35:27.794448174 +0000 UTC m=+0.995049183 container remove ed8ae340c12d1989dbc8ef24119bd7ec52a21c2547f9a0888f1566260d8899ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_shockley, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Dec 01 20:35:27 compute-0 systemd[1]: libpod-conmon-ed8ae340c12d1989dbc8ef24119bd7ec52a21c2547f9a0888f1566260d8899ea.scope: Deactivated successfully.
Dec 01 20:35:27 compute-0 sudo[100374]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:35:27 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:35:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:35:27 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:35:27 compute-0 sudo[100563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:35:27 compute-0 sudo[100563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:35:27 compute-0 sudo[100563]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:27 compute-0 ceph-mon[75880]: pgmap v172: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:27 compute-0 ceph-mon[75880]: 3.3 scrub starts
Dec 01 20:35:27 compute-0 ceph-mon[75880]: 3.3 scrub ok
Dec 01 20:35:27 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:35:27 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:35:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:29 compute-0 sudo[98854]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:29 compute-0 sudo[100737]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvzmkifkwzmiworgznbeluyemqmdrohr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621329.4072828-137-255565718054707/AnsiballZ_command.py'
Dec 01 20:35:29 compute-0 sudo[100737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:29 compute-0 python3.9[100739]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:35:30 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:30 compute-0 ceph-mon[75880]: pgmap v173: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:30 compute-0 sudo[100737]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:31 compute-0 sudo[101024]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkdvfuezhnxculjfqhbikhaxnqzpshhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621330.6827128-145-95712303481849/AnsiballZ_selinux.py'
Dec 01 20:35:31 compute-0 sudo[101024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:31 compute-0 python3.9[101026]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 01 20:35:31 compute-0 sudo[101024]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:31 compute-0 ceph-mon[75880]: pgmap v174: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:32 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 01 20:35:32 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 01 20:35:32 compute-0 sudo[101150]:     zuul : TTY=pts/0 ; PWD=/etc/yum.repos.d ; USER=root ; COMMAND=/bin/dnf install -y ceph-common
Dec 01 20:35:32 compute-0 sudo[101150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:32 compute-0 sudo[101179]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onmsidprtcbftexuhvmtyqmeotdircmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621331.862969-156-210627610577777/AnsiballZ_command.py'
Dec 01 20:35:32 compute-0 sudo[101179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:32 compute-0 python3.9[101181]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 01 20:35:32 compute-0 sudo[101179]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:35:32
Dec 01 20:35:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:35:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:35:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'images', 'cephfs.cephfs.data']
Dec 01 20:35:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:35:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:32 compute-0 sudo[101345]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsykxdmukqaxpmpjueovfxayqpnuoxkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621332.4556704-164-88754306408067/AnsiballZ_file.py'
Dec 01 20:35:32 compute-0 sudo[101345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:32 compute-0 python3.9[101348]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:35:32 compute-0 sudo[101345]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:33 compute-0 ceph-mon[75880]: 2.15 scrub starts
Dec 01 20:35:33 compute-0 ceph-mon[75880]: 2.15 scrub ok
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:35:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:35:33 compute-0 sudo[101505]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxwcafpbpkmayyjfqxlylolgwhzdgsqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621333.0214305-172-274096044738129/AnsiballZ_mount.py'
Dec 01 20:35:33 compute-0 sudo[101505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:33 compute-0 python3.9[101507]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 01 20:35:33 compute-0 sudo[101505]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:33 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 01 20:35:34 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 01 20:35:34 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 01 20:35:34 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 01 20:35:34 compute-0 ceph-mon[75880]: pgmap v175: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:34 compute-0 sudo[101678]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaoxausahwfmmzmuvaqujjmosfadwmiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621334.3725317-200-115515308613618/AnsiballZ_file.py'
Dec 01 20:35:34 compute-0 sudo[101678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:34 compute-0 python3.9[101680]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:35:34 compute-0 sudo[101678]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:35 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 01 20:35:35 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 01 20:35:35 compute-0 sudo[101830]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhirpcvuqgwttasgxrdeqdrflshrafaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621334.9337924-208-13842044731144/AnsiballZ_stat.py'
Dec 01 20:35:35 compute-0 sudo[101830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:35 compute-0 python3.9[101832]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:35:35 compute-0 ceph-mon[75880]: 5.12 scrub starts
Dec 01 20:35:35 compute-0 ceph-mon[75880]: 5.12 scrub ok
Dec 01 20:35:35 compute-0 ceph-mon[75880]: 5.7 scrub starts
Dec 01 20:35:35 compute-0 ceph-mon[75880]: 5.7 scrub ok
Dec 01 20:35:35 compute-0 sudo[101830]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:35 compute-0 sudo[101911]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpwoazsybvrdjejhcambmjmuoawzuixy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621334.9337924-208-13842044731144/AnsiballZ_file.py'
Dec 01 20:35:35 compute-0 sudo[101911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:35 compute-0 python3.9[101913]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:35:35 compute-0 sudo[101911]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:36 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 01 20:35:36 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 01 20:35:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:36 compute-0 ceph-mon[75880]: pgmap v176: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:36 compute-0 ceph-mon[75880]: 3.16 scrub starts
Dec 01 20:35:36 compute-0 ceph-mon[75880]: 3.16 scrub ok
Dec 01 20:35:37 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 01 20:35:37 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 01 20:35:37 compute-0 sudo[102067]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrhydhlkriuwojorfqstqvbzkginxwrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621336.8905463-229-45487134434692/AnsiballZ_stat.py'
Dec 01 20:35:37 compute-0 sudo[102067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:37 compute-0 python3.9[102069]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:35:37 compute-0 sudo[102067]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:37 compute-0 ceph-mon[75880]: 2.2 scrub starts
Dec 01 20:35:37 compute-0 ceph-mon[75880]: 2.2 scrub ok
Dec 01 20:35:37 compute-0 ceph-mon[75880]: pgmap v177: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:38 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 01 20:35:38 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 01 20:35:38 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 01 20:35:38 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 01 20:35:38 compute-0 sudo[102222]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plnsuioqfvwwoligducfpcvwbyuyejoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621337.8188503-242-67309619277172/AnsiballZ_getent.py'
Dec 01 20:35:38 compute-0 sudo[102222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:38 compute-0 python3.9[102224]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 01 20:35:38 compute-0 sudo[102222]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:38 compute-0 sudo[102381]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjykqiyicoshuixlvqepkwjwxygbkmen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621338.6290758-252-56134875229238/AnsiballZ_getent.py'
Dec 01 20:35:38 compute-0 sudo[102381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:38 compute-0 ceph-mon[75880]: 7.9 scrub starts
Dec 01 20:35:38 compute-0 ceph-mon[75880]: 7.9 scrub ok
Dec 01 20:35:38 compute-0 ceph-mon[75880]: 3.c scrub starts
Dec 01 20:35:38 compute-0 ceph-mon[75880]: 3.c scrub ok
Dec 01 20:35:38 compute-0 ceph-mon[75880]: 3.8 scrub starts
Dec 01 20:35:38 compute-0 ceph-mon[75880]: 3.8 scrub ok
Dec 01 20:35:39 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 01 20:35:39 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 01 20:35:39 compute-0 python3.9[102383]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 01 20:35:39 compute-0 sudo[102381]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:39 compute-0 sudo[102535]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myckpcrooiormkfvgmxndvlnmcjncqyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621339.3078904-260-83018900553811/AnsiballZ_group.py'
Dec 01 20:35:39 compute-0 sudo[102535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:39 compute-0 python3.9[102538]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 01 20:35:39 compute-0 sudo[102535]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:40 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 01 20:35:40 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 01 20:35:40 compute-0 ceph-mon[75880]: pgmap v178: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:40 compute-0 ceph-mon[75880]: 7.e scrub starts
Dec 01 20:35:40 compute-0 ceph-mon[75880]: 7.e scrub ok
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:35:40 compute-0 sudo[102696]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wynctaepewqbgehizysmtrceqbzdheep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621340.0660112-269-44197935025265/AnsiballZ_file.py'
Dec 01 20:35:40 compute-0 sudo[102696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:40 compute-0 python3.9[102698]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 01 20:35:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:40 compute-0 sudo[102696]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:41 compute-0 sudo[102848]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcixkbvpvdbqihtatbyuwxqksiqsvydu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621340.7602634-280-149631817273640/AnsiballZ_dnf.py'
Dec 01 20:35:41 compute-0 sudo[102848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:41 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 01 20:35:41 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 01 20:35:41 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 01 20:35:41 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 01 20:35:41 compute-0 python3.9[102850]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:35:41 compute-0 ceph-mon[75880]: 7.c scrub starts
Dec 01 20:35:41 compute-0 ceph-mon[75880]: 7.c scrub ok
Dec 01 20:35:42 compute-0 ceph-mon[75880]: pgmap v179: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:42 compute-0 ceph-mon[75880]: 5.4 scrub starts
Dec 01 20:35:42 compute-0 ceph-mon[75880]: 3.11 scrub starts
Dec 01 20:35:42 compute-0 ceph-mon[75880]: 5.4 scrub ok
Dec 01 20:35:42 compute-0 ceph-mon[75880]: 3.11 scrub ok
Dec 01 20:35:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:42 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 01 20:35:43 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 01 20:35:43 compute-0 sudo[102848]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:43 compute-0 ceph-mon[75880]: pgmap v180: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:43 compute-0 ceph-mon[75880]: 5.16 scrub starts
Dec 01 20:35:43 compute-0 ceph-mon[75880]: 5.16 scrub ok
Dec 01 20:35:43 compute-0 sudo[103006]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnarkpegkxdzqqtnafhvhvtzcfeeaauy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621343.3551848-288-275553381970487/AnsiballZ_file.py'
Dec 01 20:35:43 compute-0 sudo[103006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:43 compute-0 python3.9[103008]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:35:43 compute-0 sudo[103006]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:44 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 01 20:35:44 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 01 20:35:44 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 01 20:35:44 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 01 20:35:44 compute-0 sudo[103159]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwomprcvlhfwerynzpuyqbdwktkpqync ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621343.9688942-296-82476105028500/AnsiballZ_stat.py'
Dec 01 20:35:44 compute-0 sudo[103159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:44 compute-0 python3.9[103161]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:35:44 compute-0 ceph-mon[75880]: 2.a scrub starts
Dec 01 20:35:44 compute-0 ceph-mon[75880]: 2.a scrub ok
Dec 01 20:35:44 compute-0 sudo[103159]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:44 compute-0 sudo[103238]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfejzrijuhfzspgataoosphzyfylmlyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621343.9688942-296-82476105028500/AnsiballZ_file.py'
Dec 01 20:35:44 compute-0 sudo[103238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:44 compute-0 python3.9[103240]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:35:44 compute-0 sudo[103238]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:45 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 01 20:35:45 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 01 20:35:45 compute-0 sudo[103390]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ordqqtwbgkjypzygwhwribwysvffjkeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621345.01083-309-70967833291817/AnsiballZ_stat.py'
Dec 01 20:35:45 compute-0 sudo[103390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:45 compute-0 ceph-mon[75880]: 7.1c scrub starts
Dec 01 20:35:45 compute-0 ceph-mon[75880]: 7.1c scrub ok
Dec 01 20:35:45 compute-0 ceph-mon[75880]: pgmap v181: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:45 compute-0 python3.9[103392]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:35:45 compute-0 sudo[103390]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:45 compute-0 sudo[103468]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmvtbhgqppfjshrggtousfpdgyyqmrfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621345.01083-309-70967833291817/AnsiballZ_file.py'
Dec 01 20:35:45 compute-0 sudo[103468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:45 compute-0 python3.9[103470]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:35:45 compute-0 sudo[103468]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:46 compute-0 ceph-mon[75880]: 7.1a scrub starts
Dec 01 20:35:46 compute-0 ceph-mon[75880]: 7.1a scrub ok
Dec 01 20:35:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:46 compute-0 sudo[103620]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuvilghrixdhrtosyrvekxqbuordwjfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621346.3513331-324-162806641221337/AnsiballZ_dnf.py'
Dec 01 20:35:46 compute-0 sudo[103620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:46 compute-0 python3.9[103622]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:35:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 01 20:35:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 01 20:35:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 01 20:35:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 01 20:35:47 compute-0 systemd[1]: Reloading.
Dec 01 20:35:47 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 01 20:35:47 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 01 20:35:47 compute-0 systemd-sysv-generator[103671]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:35:47 compute-0 systemd-rc-local-generator[103665]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:35:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 01 20:35:48 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 01 20:35:48 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 01 20:35:48 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 01 20:35:48 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 01 20:35:48 compute-0 ceph-mon[75880]: pgmap v182: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:48 compute-0 ceph-mon[75880]: 5.9 scrub starts
Dec 01 20:35:48 compute-0 ceph-mon[75880]: 5.9 scrub ok
Dec 01 20:35:48 compute-0 ceph-mon[75880]: 3.1d scrub starts
Dec 01 20:35:48 compute-0 ceph-mon[75880]: 3.1d scrub ok
Dec 01 20:35:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:48 compute-0 sudo[103620]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:48 compute-0 sudo[101150]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:49 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 01 20:35:49 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 01 20:35:49 compute-0 python3.9[106021]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:35:49 compute-0 ceph-mon[75880]: 2.5 scrub starts
Dec 01 20:35:49 compute-0 ceph-mon[75880]: 2.5 scrub ok
Dec 01 20:35:49 compute-0 ceph-mon[75880]: 3.18 scrub starts
Dec 01 20:35:49 compute-0 ceph-mon[75880]: 3.18 scrub ok
Dec 01 20:35:49 compute-0 ceph-mon[75880]: pgmap v183: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:49 compute-0 ceph-mon[75880]: 2.4 scrub starts
Dec 01 20:35:49 compute-0 ceph-mon[75880]: 2.4 scrub ok
Dec 01 20:35:49 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 01 20:35:49 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 01 20:35:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:50 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 01 20:35:50 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 01 20:35:50 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.710s CPU time.
Dec 01 20:35:50 compute-0 systemd[1]: run-r75b4b702e2ad439cbcdd22df9b75dfad.service: Deactivated successfully.
Dec 01 20:35:50 compute-0 python3.9[106830]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 01 20:35:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:50 compute-0 ceph-mon[75880]: 2.3 scrub starts
Dec 01 20:35:50 compute-0 ceph-mon[75880]: 2.3 scrub ok
Dec 01 20:35:50 compute-0 python3.9[106980]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:35:51 compute-0 ceph-mon[75880]: pgmap v184: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:51 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 01 20:35:51 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 01 20:35:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 01 20:35:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 01 20:35:52 compute-0 sudo[107130]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvyclduapcmixyphzlrpbhwmolpiatjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621351.4349284-365-191118664602516/AnsiballZ_systemd.py'
Dec 01 20:35:52 compute-0 sudo[107130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:52 compute-0 python3.9[107132]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:35:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:52 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 01 20:35:52 compute-0 ceph-mon[75880]: 2.d scrub starts
Dec 01 20:35:52 compute-0 ceph-mon[75880]: 2.d scrub ok
Dec 01 20:35:52 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 01 20:35:52 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 01 20:35:52 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 01 20:35:52 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 01 20:35:52 compute-0 sudo[107130]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:53 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 01 20:35:53 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 01 20:35:53 compute-0 ceph-mon[75880]: 3.1 scrub starts
Dec 01 20:35:53 compute-0 ceph-mon[75880]: 3.1 scrub ok
Dec 01 20:35:53 compute-0 ceph-mon[75880]: pgmap v185: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:53 compute-0 python3.9[107293]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 01 20:35:53 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 01 20:35:53 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 01 20:35:54 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 01 20:35:54 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 01 20:35:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:54 compute-0 ceph-mon[75880]: 7.18 scrub starts
Dec 01 20:35:54 compute-0 ceph-mon[75880]: 7.18 scrub ok
Dec 01 20:35:54 compute-0 ceph-mon[75880]: 2.9 scrub starts
Dec 01 20:35:54 compute-0 ceph-mon[75880]: 2.9 scrub ok
Dec 01 20:35:54 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 01 20:35:54 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 01 20:35:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:35:55 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 01 20:35:55 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 01 20:35:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 01 20:35:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 01 20:35:55 compute-0 ceph-mon[75880]: 3.f scrub starts
Dec 01 20:35:55 compute-0 ceph-mon[75880]: 3.f scrub ok
Dec 01 20:35:55 compute-0 ceph-mon[75880]: pgmap v186: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:55 compute-0 ceph-mon[75880]: 2.6 scrub starts
Dec 01 20:35:55 compute-0 ceph-mon[75880]: 2.6 scrub ok
Dec 01 20:35:55 compute-0 ceph-mon[75880]: 6.8 scrub starts
Dec 01 20:35:55 compute-0 ceph-mon[75880]: 6.8 scrub ok
Dec 01 20:35:55 compute-0 sudo[107445]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnqyjqwwrpooerjzsmfcizgtxlhjzhpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621355.2679281-422-80846427178015/AnsiballZ_systemd.py'
Dec 01 20:35:55 compute-0 sudo[107445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:55 compute-0 python3.9[107447]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:35:56 compute-0 sudo[107445]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:56 compute-0 sudo[107599]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nasxabqhhpjndykvyxuehtzrqappqkgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621356.1416643-422-17382806944889/AnsiballZ_systemd.py'
Dec 01 20:35:56 compute-0 sudo[107599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:35:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:56 compute-0 ceph-mon[75880]: 3.1b scrub starts
Dec 01 20:35:56 compute-0 ceph-mon[75880]: 3.1b scrub ok
Dec 01 20:35:56 compute-0 python3.9[107601]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:35:56 compute-0 sudo[107599]: pam_unix(sudo:session): session closed for user root
Dec 01 20:35:57 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 01 20:35:57 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 01 20:35:57 compute-0 sshd-session[96942]: Connection closed by 192.168.122.30 port 58896
Dec 01 20:35:57 compute-0 sshd-session[96939]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:35:57 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Dec 01 20:35:57 compute-0 systemd[1]: session-36.scope: Consumed 1min 5.627s CPU time.
Dec 01 20:35:57 compute-0 systemd-logind[796]: Session 36 logged out. Waiting for processes to exit.
Dec 01 20:35:57 compute-0 systemd-logind[796]: Removed session 36.
Dec 01 20:35:57 compute-0 ceph-mon[75880]: pgmap v187: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:57 compute-0 ceph-mon[75880]: 6.f scrub starts
Dec 01 20:35:57 compute-0 ceph-mon[75880]: 6.f scrub ok
Dec 01 20:35:58 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 01 20:35:58 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 01 20:35:58 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 01 20:35:58 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 01 20:35:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:35:58 compute-0 ceph-mon[75880]: 2.7 scrub starts
Dec 01 20:35:58 compute-0 ceph-mon[75880]: 2.7 scrub ok
Dec 01 20:35:59 compute-0 ceph-mon[75880]: 7.4 scrub starts
Dec 01 20:35:59 compute-0 ceph-mon[75880]: 7.4 scrub ok
Dec 01 20:35:59 compute-0 ceph-mon[75880]: pgmap v188: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:01 compute-0 ceph-mon[75880]: pgmap v189: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:02 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 01 20:36:02 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 01 20:36:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:02 compute-0 sshd-session[107628]: Accepted publickey for zuul from 192.168.122.30 port 36732 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:36:02 compute-0 systemd-logind[796]: New session 37 of user zuul.
Dec 01 20:36:02 compute-0 systemd[1]: Started Session 37 of User zuul.
Dec 01 20:36:02 compute-0 sshd-session[107628]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:36:02 compute-0 ceph-mon[75880]: 7.1f scrub starts
Dec 01 20:36:02 compute-0 ceph-mon[75880]: 7.1f scrub ok
Dec 01 20:36:03 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 01 20:36:03 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 01 20:36:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:36:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:36:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:36:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:36:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:36:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:36:03 compute-0 python3.9[107781]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:36:03 compute-0 ceph-mon[75880]: pgmap v190: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:03 compute-0 ceph-mon[75880]: 2.18 scrub starts
Dec 01 20:36:03 compute-0 ceph-mon[75880]: 2.18 scrub ok
Dec 01 20:36:04 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 01 20:36:04 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 01 20:36:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:04 compute-0 sudo[107910]:     zuul : TTY=pts/0 ; PWD=/etc/yum.repos.d ; USER=root ; COMMAND=/bin/vi delorean-antelope-testing.repo
Dec 01 20:36:04 compute-0 sudo[107910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:04 compute-0 sudo[107938]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmkwwphfneznpmpblecdkaqrwcewxucb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621364.1721265-36-256243969373177/AnsiballZ_getent.py'
Dec 01 20:36:04 compute-0 sudo[107938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:04 compute-0 python3.9[107940]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 01 20:36:04 compute-0 sudo[107938]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:04 compute-0 ceph-mon[75880]: 2.19 scrub starts
Dec 01 20:36:04 compute-0 ceph-mon[75880]: 2.19 scrub ok
Dec 01 20:36:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 01 20:36:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 01 20:36:05 compute-0 sudo[108091]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdezfuzydrsgjpjbhvouuazlgcrhjcum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621365.0474808-48-146937274560416/AnsiballZ_setup.py'
Dec 01 20:36:05 compute-0 sudo[108091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:05 compute-0 python3.9[108093]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:36:05 compute-0 sudo[108091]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:06 compute-0 ceph-mon[75880]: pgmap v191: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:06 compute-0 ceph-mon[75880]: 5.1e scrub starts
Dec 01 20:36:06 compute-0 ceph-mon[75880]: 5.1e scrub ok
Dec 01 20:36:06 compute-0 sudo[108175]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyruripvclbneysaplctjputtxgylocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621365.0474808-48-146937274560416/AnsiballZ_dnf.py'
Dec 01 20:36:06 compute-0 sudo[108175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:06 compute-0 python3.9[108177]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 01 20:36:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:07 compute-0 sudo[108175]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:08 compute-0 ceph-mon[75880]: pgmap v192: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:08 compute-0 sudo[107910]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:08 compute-0 sudo[108328]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wourvuiwnarrgbosffrntzvhthgqhzfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621368.0163224-62-158963005010636/AnsiballZ_dnf.py'
Dec 01 20:36:08 compute-0 sudo[108328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:08 compute-0 python3.9[108330]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:36:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:10 compute-0 ceph-mon[75880]: pgmap v193: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:10 compute-0 sudo[108328]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:11 compute-0 sudo[108492]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvmxqihlrgxxmknohoroolmwhszusyju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621370.5214772-70-273692162220296/AnsiballZ_systemd.py'
Dec 01 20:36:11 compute-0 sudo[108492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:11 compute-0 ceph-mon[75880]: pgmap v194: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:11 compute-0 python3.9[108494]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 01 20:36:11 compute-0 sudo[108492]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:12 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 01 20:36:12 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 01 20:36:12 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 01 20:36:12 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 01 20:36:12 compute-0 python3.9[108647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:36:12 compute-0 ceph-mon[75880]: 5.c scrub starts
Dec 01 20:36:12 compute-0 ceph-mon[75880]: 5.c scrub ok
Dec 01 20:36:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:12 compute-0 sudo[108797]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyxhyokwxqdtfzbjtwpvvgkqsbxkvlwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621372.483792-88-41457499225194/AnsiballZ_sefcontext.py'
Dec 01 20:36:12 compute-0 sudo[108797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:13 compute-0 python3.9[108799]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 01 20:36:13 compute-0 sudo[108797]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:13 compute-0 ceph-mon[75880]: 6.0 scrub starts
Dec 01 20:36:13 compute-0 ceph-mon[75880]: 6.0 scrub ok
Dec 01 20:36:13 compute-0 ceph-mon[75880]: pgmap v195: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:14 compute-0 python3.9[108949]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:36:14 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 01 20:36:14 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 01 20:36:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:14 compute-0 sudo[109001]:     zuul : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/bin/bash
Dec 01 20:36:14 compute-0 sudo[109001]: pam_unix(sudo-i:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:14 compute-0 systemd[1]: Starting Hostname Service...
Dec 01 20:36:14 compute-0 systemd[1]: Started Hostname Service.
Dec 01 20:36:14 compute-0 sudo[109136]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clrtedfhnavqpqcfjaefbkgjiihyxvjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621374.5686588-106-38601742241680/AnsiballZ_dnf.py'
Dec 01 20:36:14 compute-0 sudo[109136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:15 compute-0 python3.9[109138]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:36:15 compute-0 ceph-mon[75880]: 6.3 scrub starts
Dec 01 20:36:15 compute-0 ceph-mon[75880]: 6.3 scrub ok
Dec 01 20:36:15 compute-0 ceph-mon[75880]: pgmap v196: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:16 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 01 20:36:16 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 01 20:36:16 compute-0 sudo[109136]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:16 compute-0 ceph-mon[75880]: 5.1 scrub starts
Dec 01 20:36:16 compute-0 ceph-mon[75880]: 5.1 scrub ok
Dec 01 20:36:16 compute-0 sudo[109289]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exavdopkwgbtozgxodoyuczhvnergpok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621376.4613163-114-92427400324003/AnsiballZ_command.py'
Dec 01 20:36:16 compute-0 sudo[109289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:17 compute-0 python3.9[109291]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:36:17 compute-0 sudo[109289]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:18 compute-0 ceph-mon[75880]: pgmap v197: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:18 compute-0 sudo[109576]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtlvvvuvpqrselhnqimhpsxbdzarwobw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621378.0308201-122-209990244808791/AnsiballZ_file.py'
Dec 01 20:36:18 compute-0 sudo[109576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:18 compute-0 python3.9[109578]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 01 20:36:18 compute-0 sudo[109576]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:19 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 01 20:36:19 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 01 20:36:19 compute-0 python3.9[109729]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:36:20 compute-0 sudo[109881]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hefvccjtjdfcqqvkzzbdvmrerhmuqsob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621379.6676986-138-257902309689661/AnsiballZ_dnf.py'
Dec 01 20:36:20 compute-0 sudo[109881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:20 compute-0 ceph-mon[75880]: pgmap v198: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:20 compute-0 ceph-mon[75880]: 5.1d scrub starts
Dec 01 20:36:20 compute-0 ceph-mon[75880]: 5.1d scrub ok
Dec 01 20:36:20 compute-0 python3.9[109883]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:36:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:21 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 01 20:36:21 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 01 20:36:21 compute-0 sudo[109881]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:21 compute-0 sudo[110034]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lamnvxyguibmdoqvccpkbhclahtcihfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621381.7390485-147-86800241114125/AnsiballZ_dnf.py'
Dec 01 20:36:21 compute-0 sudo[110034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:22 compute-0 ceph-mon[75880]: pgmap v199: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:22 compute-0 ceph-mon[75880]: 5.f scrub starts
Dec 01 20:36:22 compute-0 ceph-mon[75880]: 5.f scrub ok
Dec 01 20:36:22 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 01 20:36:22 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 01 20:36:22 compute-0 python3.9[110036]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:36:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v200: 177 pgs: 1 active+clean+scrubbing, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:23 compute-0 ceph-mon[75880]: 5.1a scrub starts
Dec 01 20:36:23 compute-0 ceph-mon[75880]: 5.1a scrub ok
Dec 01 20:36:23 compute-0 sudo[110034]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:24 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 01 20:36:24 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 01 20:36:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 1 active+clean+scrubbing, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:24 compute-0 ceph-mon[75880]: pgmap v200: 177 pgs: 1 active+clean+scrubbing, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:24 compute-0 sudo[110187]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpcnnofdpbjyjymepgfitihxgwgqbusy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621384.2686372-159-267418108454526/AnsiballZ_stat.py'
Dec 01 20:36:24 compute-0 sudo[110187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:24 compute-0 python3.9[110189]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:36:24 compute-0 sudo[110187]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:25 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 01 20:36:25 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 01 20:36:25 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 01 20:36:25 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 01 20:36:25 compute-0 sudo[110341]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejswrzhprbtjwkwcftbvqcxeyoovtqzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621384.9198468-167-146089283029392/AnsiballZ_slurp.py'
Dec 01 20:36:25 compute-0 sudo[110341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:25 compute-0 python3.9[110343]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 01 20:36:25 compute-0 sudo[110341]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:25 compute-0 ceph-mon[75880]: 6.7 scrub starts
Dec 01 20:36:25 compute-0 ceph-mon[75880]: 6.7 scrub ok
Dec 01 20:36:25 compute-0 ceph-mon[75880]: pgmap v201: 177 pgs: 1 active+clean+scrubbing, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:25 compute-0 ceph-mon[75880]: 5.19 scrub starts
Dec 01 20:36:25 compute-0 ceph-mon[75880]: 5.19 scrub ok
Dec 01 20:36:26 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 01 20:36:26 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 01 20:36:26 compute-0 sshd-session[107631]: Connection closed by 192.168.122.30 port 36732
Dec 01 20:36:26 compute-0 sshd-session[107628]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:36:26 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Dec 01 20:36:26 compute-0 systemd[1]: session-37.scope: Consumed 18.149s CPU time.
Dec 01 20:36:26 compute-0 systemd-logind[796]: Session 37 logged out. Waiting for processes to exit.
Dec 01 20:36:26 compute-0 systemd-logind[796]: Removed session 37.
Dec 01 20:36:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 1 active+clean+scrubbing, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:26 compute-0 ceph-mon[75880]: 6.9 scrub starts
Dec 01 20:36:26 compute-0 ceph-mon[75880]: 6.9 scrub ok
Dec 01 20:36:27 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 01 20:36:27 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 01 20:36:27 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 01 20:36:27 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 01 20:36:27 compute-0 ceph-mon[75880]: 6.a scrub starts
Dec 01 20:36:27 compute-0 ceph-mon[75880]: 6.a scrub ok
Dec 01 20:36:27 compute-0 ceph-mon[75880]: pgmap v202: 177 pgs: 1 active+clean+scrubbing, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:27 compute-0 ceph-mon[75880]: 5.18 scrub starts
Dec 01 20:36:27 compute-0 ceph-mon[75880]: 5.18 scrub ok
Dec 01 20:36:28 compute-0 sudo[110368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:36:28 compute-0 sudo[110368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:36:28 compute-0 sudo[110368]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:28 compute-0 sudo[110393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:36:28 compute-0 sudo[110393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:36:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:28 compute-0 ceph-mon[75880]: 6.5 scrub starts
Dec 01 20:36:28 compute-0 ceph-mon[75880]: 6.5 scrub ok
Dec 01 20:36:28 compute-0 sudo[110393]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:36:28 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:36:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:36:28 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:36:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:36:28 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:36:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:36:28 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:36:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:36:28 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:36:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:36:28 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:36:28 compute-0 sudo[110448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:36:28 compute-0 sudo[110448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:36:28 compute-0 sudo[110448]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:28 compute-0 sudo[110473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:36:28 compute-0 sudo[110473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:36:29 compute-0 podman[110509]: 2025-12-01 20:36:29.076683586 +0000 UTC m=+0.048901362 container create a9ef19f2fe3a45b243b2deb7d07400a1d63a375e7d993b0bc10332d060b95c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:36:29 compute-0 systemd[1]: Started libpod-conmon-a9ef19f2fe3a45b243b2deb7d07400a1d63a375e7d993b0bc10332d060b95c67.scope.
Dec 01 20:36:29 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:36:29 compute-0 podman[110509]: 2025-12-01 20:36:29.054536477 +0000 UTC m=+0.026754233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:36:29 compute-0 podman[110509]: 2025-12-01 20:36:29.159904697 +0000 UTC m=+0.132122473 container init a9ef19f2fe3a45b243b2deb7d07400a1d63a375e7d993b0bc10332d060b95c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:36:29 compute-0 podman[110509]: 2025-12-01 20:36:29.169202626 +0000 UTC m=+0.141420372 container start a9ef19f2fe3a45b243b2deb7d07400a1d63a375e7d993b0bc10332d060b95c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:36:29 compute-0 podman[110509]: 2025-12-01 20:36:29.174031205 +0000 UTC m=+0.146248951 container attach a9ef19f2fe3a45b243b2deb7d07400a1d63a375e7d993b0bc10332d060b95c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:36:29 compute-0 amazing_kalam[110525]: 167 167
Dec 01 20:36:29 compute-0 systemd[1]: libpod-a9ef19f2fe3a45b243b2deb7d07400a1d63a375e7d993b0bc10332d060b95c67.scope: Deactivated successfully.
Dec 01 20:36:29 compute-0 podman[110509]: 2025-12-01 20:36:29.180318787 +0000 UTC m=+0.152536533 container died a9ef19f2fe3a45b243b2deb7d07400a1d63a375e7d993b0bc10332d060b95c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 01 20:36:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f87eee352fdf2fb5c094a8cb065bf5b4ab08b7c2f5523c709fe19082e45aa18-merged.mount: Deactivated successfully.
Dec 01 20:36:29 compute-0 podman[110509]: 2025-12-01 20:36:29.235155198 +0000 UTC m=+0.207372944 container remove a9ef19f2fe3a45b243b2deb7d07400a1d63a375e7d993b0bc10332d060b95c67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:36:29 compute-0 systemd[1]: libpod-conmon-a9ef19f2fe3a45b243b2deb7d07400a1d63a375e7d993b0bc10332d060b95c67.scope: Deactivated successfully.
Dec 01 20:36:29 compute-0 podman[110548]: 2025-12-01 20:36:29.453660523 +0000 UTC m=+0.072262486 container create 681e977dd7b91baf38e6637c7152ad4588905baff7226876da2fa7f6bc69b419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 01 20:36:29 compute-0 systemd[1]: Started libpod-conmon-681e977dd7b91baf38e6637c7152ad4588905baff7226876da2fa7f6bc69b419.scope.
Dec 01 20:36:29 compute-0 podman[110548]: 2025-12-01 20:36:29.425377887 +0000 UTC m=+0.043979900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:36:29 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6fe6552fa7e968d1f09f6799bde622cbb79fbb8b4f2f3e36147227cd571ee5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6fe6552fa7e968d1f09f6799bde622cbb79fbb8b4f2f3e36147227cd571ee5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6fe6552fa7e968d1f09f6799bde622cbb79fbb8b4f2f3e36147227cd571ee5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6fe6552fa7e968d1f09f6799bde622cbb79fbb8b4f2f3e36147227cd571ee5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6fe6552fa7e968d1f09f6799bde622cbb79fbb8b4f2f3e36147227cd571ee5a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:29 compute-0 podman[110548]: 2025-12-01 20:36:29.542584149 +0000 UTC m=+0.161186152 container init 681e977dd7b91baf38e6637c7152ad4588905baff7226876da2fa7f6bc69b419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_torvalds, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:36:29 compute-0 podman[110548]: 2025-12-01 20:36:29.549715345 +0000 UTC m=+0.168317268 container start 681e977dd7b91baf38e6637c7152ad4588905baff7226876da2fa7f6bc69b419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_torvalds, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:36:29 compute-0 podman[110548]: 2025-12-01 20:36:29.552796174 +0000 UTC m=+0.171398147 container attach 681e977dd7b91baf38e6637c7152ad4588905baff7226876da2fa7f6bc69b419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_torvalds, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 01 20:36:29 compute-0 ceph-mon[75880]: pgmap v203: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:29 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:36:29 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:36:29 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:36:29 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:36:29 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:36:29 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:36:30 compute-0 agitated_torvalds[110565]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:36:30 compute-0 agitated_torvalds[110565]: --> All data devices are unavailable
Dec 01 20:36:30 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:30 compute-0 systemd[1]: libpod-681e977dd7b91baf38e6637c7152ad4588905baff7226876da2fa7f6bc69b419.scope: Deactivated successfully.
Dec 01 20:36:30 compute-0 podman[110548]: 2025-12-01 20:36:30.04551224 +0000 UTC m=+0.664114173 container died 681e977dd7b91baf38e6637c7152ad4588905baff7226876da2fa7f6bc69b419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_torvalds, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6fe6552fa7e968d1f09f6799bde622cbb79fbb8b4f2f3e36147227cd571ee5a-merged.mount: Deactivated successfully.
Dec 01 20:36:30 compute-0 podman[110548]: 2025-12-01 20:36:30.091350762 +0000 UTC m=+0.709952685 container remove 681e977dd7b91baf38e6637c7152ad4588905baff7226876da2fa7f6bc69b419 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec 01 20:36:30 compute-0 systemd[1]: libpod-conmon-681e977dd7b91baf38e6637c7152ad4588905baff7226876da2fa7f6bc69b419.scope: Deactivated successfully.
Dec 01 20:36:30 compute-0 sudo[110473]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:30 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 01 20:36:30 compute-0 sudo[110596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:36:30 compute-0 sudo[110596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:36:30 compute-0 sudo[110596]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:30 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 01 20:36:30 compute-0 sudo[110621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:36:30 compute-0 sudo[110621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:36:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:30 compute-0 ceph-mon[75880]: 6.4 scrub starts
Dec 01 20:36:30 compute-0 ceph-mon[75880]: 6.4 scrub ok
Dec 01 20:36:30 compute-0 podman[110659]: 2025-12-01 20:36:30.636463692 +0000 UTC m=+0.067016075 container create 3ee58c3214e1b23cba5bd8b8c9ca3c5abf0361bb83ec56d9f14343a92e0825aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lamarr, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 01 20:36:30 compute-0 systemd[1]: Started libpod-conmon-3ee58c3214e1b23cba5bd8b8c9ca3c5abf0361bb83ec56d9f14343a92e0825aa.scope.
Dec 01 20:36:30 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:36:30 compute-0 podman[110659]: 2025-12-01 20:36:30.614674292 +0000 UTC m=+0.045226665 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:36:30 compute-0 podman[110659]: 2025-12-01 20:36:30.723293237 +0000 UTC m=+0.153845640 container init 3ee58c3214e1b23cba5bd8b8c9ca3c5abf0361bb83ec56d9f14343a92e0825aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:36:30 compute-0 podman[110659]: 2025-12-01 20:36:30.732520013 +0000 UTC m=+0.163072386 container start 3ee58c3214e1b23cba5bd8b8c9ca3c5abf0361bb83ec56d9f14343a92e0825aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:36:30 compute-0 podman[110659]: 2025-12-01 20:36:30.736338073 +0000 UTC m=+0.166890526 container attach 3ee58c3214e1b23cba5bd8b8c9ca3c5abf0361bb83ec56d9f14343a92e0825aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:36:30 compute-0 sleepy_lamarr[110678]: 167 167
Dec 01 20:36:30 compute-0 systemd[1]: libpod-3ee58c3214e1b23cba5bd8b8c9ca3c5abf0361bb83ec56d9f14343a92e0825aa.scope: Deactivated successfully.
Dec 01 20:36:30 compute-0 podman[110659]: 2025-12-01 20:36:30.73900304 +0000 UTC m=+0.169555423 container died 3ee58c3214e1b23cba5bd8b8c9ca3c5abf0361bb83ec56d9f14343a92e0825aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 01 20:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ac8e0aaee680ff5a3feab491a71ff8a5cd7c98b2e3ddfa0a9eed14540ed5f9f-merged.mount: Deactivated successfully.
Dec 01 20:36:30 compute-0 podman[110659]: 2025-12-01 20:36:30.783611997 +0000 UTC m=+0.214164400 container remove 3ee58c3214e1b23cba5bd8b8c9ca3c5abf0361bb83ec56d9f14343a92e0825aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 01 20:36:30 compute-0 systemd[1]: libpod-conmon-3ee58c3214e1b23cba5bd8b8c9ca3c5abf0361bb83ec56d9f14343a92e0825aa.scope: Deactivated successfully.
Dec 01 20:36:31 compute-0 podman[110701]: 2025-12-01 20:36:31.002553724 +0000 UTC m=+0.065335796 container create 0c28326c8a50f32317b2c13c2c543bf83e01c24411c5726d17f0d05ad8bef6d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rubin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 01 20:36:31 compute-0 systemd[1]: Started libpod-conmon-0c28326c8a50f32317b2c13c2c543bf83e01c24411c5726d17f0d05ad8bef6d5.scope.
Dec 01 20:36:31 compute-0 podman[110701]: 2025-12-01 20:36:30.975086152 +0000 UTC m=+0.037868264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:36:31 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5678b92bd65598138957f794f18d1cb388f0b57c32774890fd126ce8ddf29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5678b92bd65598138957f794f18d1cb388f0b57c32774890fd126ce8ddf29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5678b92bd65598138957f794f18d1cb388f0b57c32774890fd126ce8ddf29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e5678b92bd65598138957f794f18d1cb388f0b57c32774890fd126ce8ddf29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:31 compute-0 podman[110701]: 2025-12-01 20:36:31.105156605 +0000 UTC m=+0.167938637 container init 0c28326c8a50f32317b2c13c2c543bf83e01c24411c5726d17f0d05ad8bef6d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 01 20:36:31 compute-0 podman[110701]: 2025-12-01 20:36:31.114622178 +0000 UTC m=+0.177404240 container start 0c28326c8a50f32317b2c13c2c543bf83e01c24411c5726d17f0d05ad8bef6d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:36:31 compute-0 podman[110701]: 2025-12-01 20:36:31.118388466 +0000 UTC m=+0.181170518 container attach 0c28326c8a50f32317b2c13c2c543bf83e01c24411c5726d17f0d05ad8bef6d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]: {
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:     "0": [
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:         {
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "devices": [
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "/dev/loop3"
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             ],
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_name": "ceph_lv0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_size": "21470642176",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "name": "ceph_lv0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "tags": {
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.cluster_name": "ceph",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.crush_device_class": "",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.encrypted": "0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.objectstore": "bluestore",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.osd_id": "0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.type": "block",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.vdo": "0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.with_tpm": "0"
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             },
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "type": "block",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "vg_name": "ceph_vg0"
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:         }
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:     ],
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:     "1": [
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:         {
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "devices": [
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "/dev/loop4"
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             ],
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_name": "ceph_lv1",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_size": "21470642176",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "name": "ceph_lv1",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "tags": {
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.cluster_name": "ceph",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.crush_device_class": "",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.encrypted": "0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.objectstore": "bluestore",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.osd_id": "1",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.type": "block",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.vdo": "0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.with_tpm": "0"
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             },
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "type": "block",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "vg_name": "ceph_vg1"
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:         }
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:     ],
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:     "2": [
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:         {
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "devices": [
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "/dev/loop5"
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             ],
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_name": "ceph_lv2",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_size": "21470642176",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "name": "ceph_lv2",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "tags": {
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.cluster_name": "ceph",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.crush_device_class": "",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.encrypted": "0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.objectstore": "bluestore",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.osd_id": "2",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.type": "block",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.vdo": "0",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:                 "ceph.with_tpm": "0"
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             },
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "type": "block",
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:             "vg_name": "ceph_vg2"
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:         }
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]:     ]
Dec 01 20:36:31 compute-0 vibrant_rubin[110717]: }
Dec 01 20:36:31 compute-0 systemd[1]: libpod-0c28326c8a50f32317b2c13c2c543bf83e01c24411c5726d17f0d05ad8bef6d5.scope: Deactivated successfully.
Dec 01 20:36:31 compute-0 podman[110701]: 2025-12-01 20:36:31.423995334 +0000 UTC m=+0.486777396 container died 0c28326c8a50f32317b2c13c2c543bf83e01c24411c5726d17f0d05ad8bef6d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-31e5678b92bd65598138957f794f18d1cb388f0b57c32774890fd126ce8ddf29-merged.mount: Deactivated successfully.
Dec 01 20:36:31 compute-0 podman[110701]: 2025-12-01 20:36:31.476485049 +0000 UTC m=+0.539267111 container remove 0c28326c8a50f32317b2c13c2c543bf83e01c24411c5726d17f0d05ad8bef6d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_rubin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:36:31 compute-0 systemd[1]: libpod-conmon-0c28326c8a50f32317b2c13c2c543bf83e01c24411c5726d17f0d05ad8bef6d5.scope: Deactivated successfully.
Dec 01 20:36:31 compute-0 sudo[110621]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:31 compute-0 sudo[110739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:36:31 compute-0 sudo[110739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:36:31 compute-0 sudo[110739]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:31 compute-0 ceph-mon[75880]: pgmap v204: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:31 compute-0 sudo[110764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:36:31 compute-0 sudo[110764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:36:31 compute-0 podman[110802]: 2025-12-01 20:36:31.889014031 +0000 UTC m=+0.036535375 container create 3cac50714a4d19ccfea628a6bff82099bade521001d1e1eb73913018c4f1f4ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:36:31 compute-0 systemd[1]: Started libpod-conmon-3cac50714a4d19ccfea628a6bff82099bade521001d1e1eb73913018c4f1f4ca.scope.
Dec 01 20:36:31 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:36:31 compute-0 podman[110802]: 2025-12-01 20:36:31.873280498 +0000 UTC m=+0.020801842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:36:31 compute-0 podman[110802]: 2025-12-01 20:36:31.968568487 +0000 UTC m=+0.116089951 container init 3cac50714a4d19ccfea628a6bff82099bade521001d1e1eb73913018c4f1f4ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_neumann, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Dec 01 20:36:31 compute-0 podman[110802]: 2025-12-01 20:36:31.98008616 +0000 UTC m=+0.127607484 container start 3cac50714a4d19ccfea628a6bff82099bade521001d1e1eb73913018c4f1f4ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_neumann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 01 20:36:31 compute-0 intelligent_neumann[110818]: 167 167
Dec 01 20:36:31 compute-0 systemd[1]: libpod-3cac50714a4d19ccfea628a6bff82099bade521001d1e1eb73913018c4f1f4ca.scope: Deactivated successfully.
Dec 01 20:36:31 compute-0 podman[110802]: 2025-12-01 20:36:31.98427678 +0000 UTC m=+0.131798144 container attach 3cac50714a4d19ccfea628a6bff82099bade521001d1e1eb73913018c4f1f4ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_neumann, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:36:31 compute-0 podman[110802]: 2025-12-01 20:36:31.985052603 +0000 UTC m=+0.132573967 container died 3cac50714a4d19ccfea628a6bff82099bade521001d1e1eb73913018c4f1f4ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 01 20:36:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-842f44deec166bcd40ae4aabc766cd7daa121203e1c4c6cd08435a7c0af84d3c-merged.mount: Deactivated successfully.
Dec 01 20:36:32 compute-0 podman[110802]: 2025-12-01 20:36:32.033783799 +0000 UTC m=+0.181305163 container remove 3cac50714a4d19ccfea628a6bff82099bade521001d1e1eb73913018c4f1f4ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_neumann, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:36:32 compute-0 systemd[1]: libpod-conmon-3cac50714a4d19ccfea628a6bff82099bade521001d1e1eb73913018c4f1f4ca.scope: Deactivated successfully.
Dec 01 20:36:32 compute-0 sshd-session[110836]: Accepted publickey for zuul from 192.168.122.30 port 56456 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:36:32 compute-0 podman[110843]: 2025-12-01 20:36:32.223124712 +0000 UTC m=+0.037120282 container create 099dab498591a2cbb4a9c9e97d9f596d1b2bd8478d51b0b2db4b76bdc7900b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:36:32 compute-0 systemd-logind[796]: New session 38 of user zuul.
Dec 01 20:36:32 compute-0 systemd[1]: Started Session 38 of User zuul.
Dec 01 20:36:32 compute-0 systemd[1]: Started libpod-conmon-099dab498591a2cbb4a9c9e97d9f596d1b2bd8478d51b0b2db4b76bdc7900b0f.scope.
Dec 01 20:36:32 compute-0 sshd-session[110836]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:36:32 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860ac27c17cb26311083b16ac73b89a25ef4fd8f074e9f28b5d01420e9852075/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860ac27c17cb26311083b16ac73b89a25ef4fd8f074e9f28b5d01420e9852075/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860ac27c17cb26311083b16ac73b89a25ef4fd8f074e9f28b5d01420e9852075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860ac27c17cb26311083b16ac73b89a25ef4fd8f074e9f28b5d01420e9852075/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:36:32 compute-0 podman[110843]: 2025-12-01 20:36:32.281533667 +0000 UTC m=+0.095529277 container init 099dab498591a2cbb4a9c9e97d9f596d1b2bd8478d51b0b2db4b76bdc7900b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:36:32 compute-0 podman[110843]: 2025-12-01 20:36:32.288494329 +0000 UTC m=+0.102489929 container start 099dab498591a2cbb4a9c9e97d9f596d1b2bd8478d51b0b2db4b76bdc7900b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:36:32 compute-0 podman[110843]: 2025-12-01 20:36:32.29136632 +0000 UTC m=+0.105361910 container attach 099dab498591a2cbb4a9c9e97d9f596d1b2bd8478d51b0b2db4b76bdc7900b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:36:32 compute-0 podman[110843]: 2025-12-01 20:36:32.206278326 +0000 UTC m=+0.020273926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:36:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:36:32
Dec 01 20:36:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:36:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:36:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'volumes', 'backups', 'vms', '.mgr']
Dec 01 20:36:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:36:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:32 compute-0 lvm[111091]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:36:32 compute-0 lvm[111092]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:36:32 compute-0 lvm[111091]: VG ceph_vg0 finished
Dec 01 20:36:32 compute-0 lvm[111092]: VG ceph_vg1 finished
Dec 01 20:36:32 compute-0 lvm[111094]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:36:32 compute-0 lvm[111094]: VG ceph_vg2 finished
Dec 01 20:36:33 compute-0 silly_shannon[110861]: {}
Dec 01 20:36:33 compute-0 systemd[1]: libpod-099dab498591a2cbb4a9c9e97d9f596d1b2bd8478d51b0b2db4b76bdc7900b0f.scope: Deactivated successfully.
Dec 01 20:36:33 compute-0 systemd[1]: libpod-099dab498591a2cbb4a9c9e97d9f596d1b2bd8478d51b0b2db4b76bdc7900b0f.scope: Consumed 1.322s CPU time.
Dec 01 20:36:33 compute-0 podman[110843]: 2025-12-01 20:36:33.112110112 +0000 UTC m=+0.926105692 container died 099dab498591a2cbb4a9c9e97d9f596d1b2bd8478d51b0b2db4b76bdc7900b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-860ac27c17cb26311083b16ac73b89a25ef4fd8f074e9f28b5d01420e9852075-merged.mount: Deactivated successfully.
Dec 01 20:36:33 compute-0 python3.9[111083]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:36:33 compute-0 podman[110843]: 2025-12-01 20:36:33.160927051 +0000 UTC m=+0.974922631 container remove 099dab498591a2cbb4a9c9e97d9f596d1b2bd8478d51b0b2db4b76bdc7900b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Dec 01 20:36:33 compute-0 systemd[1]: libpod-conmon-099dab498591a2cbb4a9c9e97d9f596d1b2bd8478d51b0b2db4b76bdc7900b0f.scope: Deactivated successfully.
Dec 01 20:36:33 compute-0 sudo[110764]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:36:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:36:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:36:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:36:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:36:33 compute-0 sudo[111114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:36:33 compute-0 sudo[111114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:36:33 compute-0 sudo[111114]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:33 compute-0 ceph-mon[75880]: pgmap v205: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:33 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:36:33 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:36:34 compute-0 python3.9[111288]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:36:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:35 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 01 20:36:35 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 01 20:36:35 compute-0 python3.9[111481]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:36:35 compute-0 ceph-mon[75880]: pgmap v206: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:35 compute-0 ceph-mon[75880]: 6.b scrub starts
Dec 01 20:36:35 compute-0 ceph-mon[75880]: 6.b scrub ok
Dec 01 20:36:35 compute-0 sshd-session[110863]: Connection closed by 192.168.122.30 port 56456
Dec 01 20:36:35 compute-0 sshd-session[110836]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:36:35 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Dec 01 20:36:35 compute-0 systemd[1]: session-38.scope: Consumed 2.356s CPU time.
Dec 01 20:36:35 compute-0 systemd-logind[796]: Session 38 logged out. Waiting for processes to exit.
Dec 01 20:36:35 compute-0 systemd-logind[796]: Removed session 38.
Dec 01 20:36:36 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 01 20:36:36 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 01 20:36:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:36 compute-0 ceph-mon[75880]: 6.e scrub starts
Dec 01 20:36:36 compute-0 ceph-mon[75880]: 6.e scrub ok
Dec 01 20:36:37 compute-0 ceph-mon[75880]: pgmap v207: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:38 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 01 20:36:38 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 01 20:36:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:38 compute-0 ceph-mon[75880]: 6.1 scrub starts
Dec 01 20:36:38 compute-0 ceph-mon[75880]: 6.1 scrub ok
Dec 01 20:36:39 compute-0 ceph-mon[75880]: pgmap v208: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:36:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:40 compute-0 sshd-session[111510]: Accepted publickey for zuul from 192.168.122.30 port 36218 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:36:40 compute-0 systemd-logind[796]: New session 39 of user zuul.
Dec 01 20:36:40 compute-0 systemd[1]: Started Session 39 of User zuul.
Dec 01 20:36:40 compute-0 sshd-session[111510]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:36:41 compute-0 ceph-mon[75880]: pgmap v209: 177 pgs: 177 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:41 compute-0 python3.9[111663]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:36:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:42 compute-0 python3.9[111817]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:36:43 compute-0 sudo[111971]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqkhfzabhpzcmddnmzhfwizgvgvtnwqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621403.2748673-40-76078104066863/AnsiballZ_setup.py'
Dec 01 20:36:43 compute-0 sudo[111971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:43 compute-0 ceph-mon[75880]: pgmap v210: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:43 compute-0 python3.9[111973]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:36:44 compute-0 sudo[111971]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:44 compute-0 sudo[112055]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gptxgkmggfgvsjdsietomajqrgpubpuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621403.2748673-40-76078104066863/AnsiballZ_dnf.py'
Dec 01 20:36:44 compute-0 sudo[112055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:44 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 01 20:36:44 compute-0 python3.9[112057]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:36:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:45 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 01 20:36:45 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 01 20:36:45 compute-0 ceph-mon[75880]: pgmap v211: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:45 compute-0 ceph-mon[75880]: 6.6 scrub starts
Dec 01 20:36:45 compute-0 ceph-mon[75880]: 6.6 scrub ok
Dec 01 20:36:46 compute-0 sudo[112055]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:46 compute-0 sudo[112222]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lztthlamdalpqnanlvhjgspbfdocdogt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621406.6910994-52-107593726063492/AnsiballZ_setup.py'
Dec 01 20:36:46 compute-0 sudo[112222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 01 20:36:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 01 20:36:47 compute-0 python3.9[112224]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:36:47 compute-0 sudo[112222]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:47 compute-0 ceph-mon[75880]: pgmap v212: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:47 compute-0 ceph-mon[75880]: 6.2 scrub starts
Dec 01 20:36:47 compute-0 ceph-mon[75880]: 6.2 scrub ok
Dec 01 20:36:48 compute-0 sudo[112417]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjvsxwazczpapchsallbzebtlrdjxwzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621407.8664699-63-272613717912640/AnsiballZ_file.py'
Dec 01 20:36:48 compute-0 sudo[112417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:48 compute-0 python3.9[112419]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:36:48 compute-0 sudo[112417]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:49 compute-0 sudo[112569]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjvwtryfmmqctigrummwjnrbwpemuhcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621408.8133247-71-65958317959624/AnsiballZ_command.py'
Dec 01 20:36:49 compute-0 sudo[112569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:49 compute-0 python3.9[112571]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:36:49 compute-0 sudo[112569]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:49 compute-0 ceph-mon[75880]: pgmap v213: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:50 compute-0 sudo[112734]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wembzgefgukgbtemsyrmhcsiltjmnoqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621409.646138-79-180036163811535/AnsiballZ_stat.py'
Dec 01 20:36:50 compute-0 sudo[112734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:50 compute-0 python3.9[112736]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:36:50 compute-0 sudo[112734]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:50 compute-0 sudo[112813]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqdebrbvmmdmbnjmpmsnlveryqzredbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621409.646138-79-180036163811535/AnsiballZ_file.py'
Dec 01 20:36:50 compute-0 sudo[112813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:50 compute-0 python3.9[112815]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:36:50 compute-0 sudo[112813]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:51 compute-0 sudo[112965]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdxxmpmhpyjqmwvdjqvoubnnwsaqntuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621410.9206076-91-114147960494516/AnsiballZ_stat.py'
Dec 01 20:36:51 compute-0 sudo[112965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:51 compute-0 python3.9[112967]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:36:51 compute-0 sudo[112965]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:51 compute-0 sudo[113043]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozxvfdqyaawiypwdlsegybzghozklzao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621410.9206076-91-114147960494516/AnsiballZ_file.py'
Dec 01 20:36:51 compute-0 ceph-mon[75880]: pgmap v214: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:51 compute-0 sudo[113043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:51 compute-0 python3.9[113045]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:36:51 compute-0 sudo[113043]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:52 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 01 20:36:52 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 01 20:36:52 compute-0 sudo[113195]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnbbxxjuvfmfjpiflktrhylnetdrivvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621412.1017928-104-193022938893980/AnsiballZ_ini_file.py'
Dec 01 20:36:52 compute-0 sudo[113195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:52 compute-0 python3.9[113197]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:36:52 compute-0 sudo[113195]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:52 compute-0 ceph-mon[75880]: 6.d scrub starts
Dec 01 20:36:52 compute-0 ceph-mon[75880]: 6.d scrub ok
Dec 01 20:36:53 compute-0 sudo[113347]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixrnhlyqadqoxverqirvhjdjngfkiglk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621412.8355188-104-177023475238734/AnsiballZ_ini_file.py'
Dec 01 20:36:53 compute-0 sudo[113347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:53 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 01 20:36:53 compute-0 python3.9[113349]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:36:53 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 01 20:36:53 compute-0 sudo[113347]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:53 compute-0 ceph-mon[75880]: pgmap v215: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:53 compute-0 ceph-mon[75880]: 6.c scrub starts
Dec 01 20:36:53 compute-0 ceph-mon[75880]: 6.c scrub ok
Dec 01 20:36:53 compute-0 sudo[113499]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pswzmtfuijjunbovtkmtkdeqvofmdeyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621413.4544973-104-152076648142817/AnsiballZ_ini_file.py'
Dec 01 20:36:53 compute-0 sudo[113499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:53 compute-0 python3.9[113501]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:36:54 compute-0 sudo[113499]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:54 compute-0 sudo[113651]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpvebitcaxllpeclyrggbdlummoaljdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621414.1296065-104-217744773157323/AnsiballZ_ini_file.py'
Dec 01 20:36:54 compute-0 sudo[113651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:54 compute-0 python3.9[113653]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:36:54 compute-0 sudo[113651]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:36:55 compute-0 sudo[113803]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egbsmmgpogvtjqjzbykzxhomwbjauhsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621414.8324337-135-266136121615518/AnsiballZ_dnf.py'
Dec 01 20:36:55 compute-0 sudo[113803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:55 compute-0 python3.9[113805]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:36:55 compute-0 ceph-mon[75880]: pgmap v216: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:56 compute-0 sudo[113803]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:57 compute-0 sudo[113956]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rssgxfxzgdiaavznprfiecfyxoifjopw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621416.8595307-146-191450170601104/AnsiballZ_setup.py'
Dec 01 20:36:57 compute-0 sudo[113956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:57 compute-0 python3.9[113958]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:36:57 compute-0 sudo[113956]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:57 compute-0 ceph-mon[75880]: pgmap v217: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:57 compute-0 sudo[114110]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grpeumtwhnqzdbzilfmtcomscdlmawnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621417.582457-154-122030004592890/AnsiballZ_stat.py'
Dec 01 20:36:57 compute-0 sudo[114110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:58 compute-0 python3.9[114112]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:36:58 compute-0 sudo[114110]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:58 compute-0 sudo[114262]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btrahygdxnbbhqmlkwnbyhyftuiafysj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621418.3448873-163-136197302150502/AnsiballZ_stat.py'
Dec 01 20:36:58 compute-0 sudo[114262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:58 compute-0 python3.9[114264]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:36:58 compute-0 sudo[114262]: pam_unix(sudo:session): session closed for user root
Dec 01 20:36:59 compute-0 sudo[114414]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpxwblnttcrrsszicxurgkrjvdenvafq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621419.054699-173-233461362552878/AnsiballZ_command.py'
Dec 01 20:36:59 compute-0 sudo[114414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:36:59 compute-0 ceph-mon[75880]: pgmap v218: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:36:59 compute-0 python3.9[114416]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:36:59 compute-0 sudo[114414]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:00 compute-0 sudo[114567]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zamklceshynyjbacwtdbbvdhrazxtvjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621419.8334837-183-33940975871638/AnsiballZ_service_facts.py'
Dec 01 20:37:00 compute-0 sudo[114567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:00 compute-0 python3.9[114569]: ansible-service_facts Invoked
Dec 01 20:37:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:00 compute-0 network[114586]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 01 20:37:00 compute-0 network[114587]: 'network-scripts' will be removed from distribution in near future.
Dec 01 20:37:00 compute-0 network[114588]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 01 20:37:01 compute-0 ceph-mon[75880]: pgmap v219: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:37:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:37:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:37:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:37:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:37:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:37:03 compute-0 sudo[114567]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:03 compute-0 ceph-mon[75880]: pgmap v220: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:04 compute-0 sudo[114872]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvqmvfevbvxhkuzibkvhhhdjgkldnzbl ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764621423.857631-198-280124867229357/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764621423.857631-198-280124867229357/args'
Dec 01 20:37:04 compute-0 sudo[114872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:04 compute-0 sudo[114872]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:04 compute-0 sudo[115039]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swaqchqsslemfpydbpnqytzkxqqosojd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621424.553731-209-102147775618227/AnsiballZ_dnf.py'
Dec 01 20:37:04 compute-0 sudo[115039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:05 compute-0 python3.9[115041]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:37:05 compute-0 ceph-mon[75880]: pgmap v221: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v222: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:06 compute-0 sudo[115039]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:07 compute-0 sudo[115192]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biqoomresjcgbqhptdgyxeynagrjfycf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621426.9367864-222-37539681707583/AnsiballZ_package_facts.py'
Dec 01 20:37:07 compute-0 sudo[115192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:07 compute-0 python3.9[115194]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 01 20:37:07 compute-0 ceph-mon[75880]: pgmap v222: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:08 compute-0 sudo[115192]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:08 compute-0 sudo[115344]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuvlygxkpvyswqnvnxnbiqnuapodvpfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621428.4864206-232-27774456355794/AnsiballZ_stat.py'
Dec 01 20:37:08 compute-0 sudo[115344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:09 compute-0 python3.9[115346]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:09 compute-0 sudo[115344]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:09 compute-0 sudo[115422]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zavsnfnmwinedtoirtztbchhaudlvdrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621428.4864206-232-27774456355794/AnsiballZ_file.py'
Dec 01 20:37:09 compute-0 sudo[115422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:09 compute-0 python3.9[115424]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:09 compute-0 sudo[115422]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:09 compute-0 ceph-mon[75880]: pgmap v223: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:09 compute-0 sudo[115574]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kblksnduofalohqsxcghzinixtufongg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621429.7194598-244-210912234460873/AnsiballZ_stat.py'
Dec 01 20:37:09 compute-0 sudo[115574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:10 compute-0 python3.9[115576]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:10 compute-0 sudo[115574]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:10 compute-0 sudo[115652]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtyppkrdqgnjrxgnppiyhazpfwfqjcco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621429.7194598-244-210912234460873/AnsiballZ_file.py'
Dec 01 20:37:10 compute-0 sudo[115652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:10 compute-0 python3.9[115654]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:10 compute-0 sudo[115652]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:11 compute-0 sudo[115804]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdslajppjbwsojegjchhpjufzmzpmoub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621431.0205584-262-122802753793504/AnsiballZ_lineinfile.py'
Dec 01 20:37:11 compute-0 sudo[115804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:11 compute-0 python3.9[115806]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:11 compute-0 sudo[115804]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:12 compute-0 ceph-mon[75880]: pgmap v224: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:12 compute-0 sudo[115956]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uscmsktxeweioswbjxqdbslxujkfbgpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621432.0902672-277-207454985466130/AnsiballZ_setup.py'
Dec 01 20:37:12 compute-0 sudo[115956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:12 compute-0 python3.9[115958]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:37:12 compute-0 sudo[115956]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:13 compute-0 sudo[116040]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnauqhhznxklepnnjhlkirbudknfxqig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621432.0902672-277-207454985466130/AnsiballZ_systemd.py'
Dec 01 20:37:13 compute-0 sudo[116040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:13 compute-0 python3.9[116042]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:37:13 compute-0 sudo[116040]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:14 compute-0 ceph-mon[75880]: pgmap v225: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:14 compute-0 sshd-session[111513]: Connection closed by 192.168.122.30 port 36218
Dec 01 20:37:14 compute-0 sshd-session[111510]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:37:14 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Dec 01 20:37:14 compute-0 systemd[1]: session-39.scope: Consumed 24.375s CPU time.
Dec 01 20:37:14 compute-0 systemd-logind[796]: Session 39 logged out. Waiting for processes to exit.
Dec 01 20:37:14 compute-0 systemd-logind[796]: Removed session 39.
Dec 01 20:37:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:16 compute-0 ceph-mon[75880]: pgmap v226: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:18 compute-0 ceph-mon[75880]: pgmap v227: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:20 compute-0 ceph-mon[75880]: pgmap v228: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:20 compute-0 sshd-session[116069]: Accepted publickey for zuul from 192.168.122.30 port 38732 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:37:20 compute-0 systemd-logind[796]: New session 40 of user zuul.
Dec 01 20:37:20 compute-0 systemd[1]: Started Session 40 of User zuul.
Dec 01 20:37:20 compute-0 sshd-session[116069]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:37:21 compute-0 sudo[116222]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdqywycyojkcbjamwbrhngtwahtukzib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621440.771608-22-151973160705903/AnsiballZ_file.py'
Dec 01 20:37:21 compute-0 sudo[116222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:21 compute-0 python3.9[116224]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:21 compute-0 sudo[116222]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:22 compute-0 sudo[116374]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozpapieoriixfflmmzyuhhxxbyidcizp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621441.6792223-34-196841998462294/AnsiballZ_stat.py'
Dec 01 20:37:22 compute-0 sudo[116374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:22 compute-0 ceph-mon[75880]: pgmap v229: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:22 compute-0 python3.9[116376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:22 compute-0 sudo[116374]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:22 compute-0 sudo[116452]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxyishgesslmjvfwexhqgcwxdlkttekm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621441.6792223-34-196841998462294/AnsiballZ_file.py'
Dec 01 20:37:22 compute-0 sudo[116452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:22 compute-0 python3.9[116454]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:22 compute-0 sudo[116452]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:23 compute-0 systemd[77264]: Created slice User Background Tasks Slice.
Dec 01 20:37:23 compute-0 systemd[77264]: Starting Cleanup of User's Temporary Files and Directories...
Dec 01 20:37:23 compute-0 systemd[77264]: Finished Cleanup of User's Temporary Files and Directories.
Dec 01 20:37:23 compute-0 sshd-session[116072]: Connection closed by 192.168.122.30 port 38732
Dec 01 20:37:23 compute-0 sshd-session[116069]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:37:23 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Dec 01 20:37:23 compute-0 systemd[1]: session-40.scope: Consumed 1.662s CPU time.
Dec 01 20:37:23 compute-0 systemd-logind[796]: Session 40 logged out. Waiting for processes to exit.
Dec 01 20:37:23 compute-0 systemd-logind[796]: Removed session 40.
Dec 01 20:37:24 compute-0 ceph-mon[75880]: pgmap v230: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:26 compute-0 ceph-mon[75880]: pgmap v231: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:27 compute-0 ceph-mon[75880]: pgmap v232: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:28 compute-0 sshd-session[116481]: Accepted publickey for zuul from 192.168.122.30 port 38744 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:37:28 compute-0 systemd-logind[796]: New session 41 of user zuul.
Dec 01 20:37:28 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec 01 20:37:28 compute-0 sshd-session[116481]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:37:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:29 compute-0 python3.9[116634]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:37:29 compute-0 ceph-mon[75880]: pgmap v233: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:30 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:30 compute-0 sudo[116788]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkpwcdwcscqvnncffmwdkvrcvbnpilgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621449.7437837-33-40516471371645/AnsiballZ_file.py'
Dec 01 20:37:30 compute-0 sudo[116788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:30 compute-0 python3.9[116790]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:30 compute-0 sudo[116788]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:30 compute-0 sudo[116963]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atyrqffwnngzogswfjmiygyxyqzvejrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621450.5493367-41-225067120187562/AnsiballZ_stat.py'
Dec 01 20:37:31 compute-0 sudo[116963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:31 compute-0 python3.9[116965]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:31 compute-0 sudo[116963]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:31 compute-0 sudo[117041]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zplipmeotbfwtmajsjhrpaedxoiltdpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621450.5493367-41-225067120187562/AnsiballZ_file.py'
Dec 01 20:37:31 compute-0 sudo[117041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:31 compute-0 ceph-mon[75880]: pgmap v234: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:31 compute-0 python3.9[117043]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.laifa6x7 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:31 compute-0 sudo[117041]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:32 compute-0 sudo[117193]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oryesdyqlfyzceqkqxgebjcfbrpfxzxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621451.9116683-61-258157415654447/AnsiballZ_stat.py'
Dec 01 20:37:32 compute-0 sudo[117193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:32 compute-0 python3.9[117195]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:32 compute-0 sudo[117193]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:37:32
Dec 01 20:37:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:37:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:37:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'backups', 'volumes']
Dec 01 20:37:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:37:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:32 compute-0 sudo[117271]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdhdimvsqgoarwqgbnxgrwptojdmhkpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621451.9116683-61-258157415654447/AnsiballZ_file.py'
Dec 01 20:37:32 compute-0 sudo[117271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:32 compute-0 python3.9[117273]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.fjbjrome recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:32 compute-0 sudo[117271]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:33 compute-0 sudo[117423]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyouuobkrirvheyydvbpgnhrlwstwggv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621452.9271402-74-126463276402523/AnsiballZ_file.py'
Dec 01 20:37:33 compute-0 sudo[117423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:37:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:37:33 compute-0 python3.9[117425]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:37:33 compute-0 sudo[117426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:37:33 compute-0 sudo[117426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:37:33 compute-0 sudo[117426]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:33 compute-0 sudo[117423]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:33 compute-0 sudo[117451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:37:33 compute-0 sudo[117451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:37:33 compute-0 ceph-mon[75880]: pgmap v235: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:33 compute-0 sudo[117642]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uauycemiegypwvaiqyiswnzmwtgyoxmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621453.5177388-82-146705029462358/AnsiballZ_stat.py'
Dec 01 20:37:33 compute-0 sudo[117642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:33 compute-0 sudo[117451]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:37:33 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:37:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:37:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:37:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:37:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:37:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:37:33 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:37:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:37:33 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:37:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:37:33 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:37:33 compute-0 python3.9[117644]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:33 compute-0 sudo[117642]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:34 compute-0 sudo[117659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:37:34 compute-0 sudo[117659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:37:34 compute-0 sudo[117659]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:34 compute-0 sudo[117690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:37:34 compute-0 sudo[117690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:37:34 compute-0 sudo[117784]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtenayginmlyrvxsviuhqhciumkqqaxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621453.5177388-82-146705029462358/AnsiballZ_file.py'
Dec 01 20:37:34 compute-0 sudo[117784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:34 compute-0 podman[117798]: 2025-12-01 20:37:34.375982752 +0000 UTC m=+0.059984917 container create 52b7dadb6436b60e4b45eea2422c56cc3897740a74023f8132dbc7b566ae564e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_antonelli, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:37:34 compute-0 python3.9[117786]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:37:34 compute-0 sudo[117784]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:34 compute-0 podman[117798]: 2025-12-01 20:37:34.337428645 +0000 UTC m=+0.021430820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:37:34 compute-0 systemd[1]: Started libpod-conmon-52b7dadb6436b60e4b45eea2422c56cc3897740a74023f8132dbc7b566ae564e.scope.
Dec 01 20:37:34 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:37:34 compute-0 podman[117798]: 2025-12-01 20:37:34.487405188 +0000 UTC m=+0.171407363 container init 52b7dadb6436b60e4b45eea2422c56cc3897740a74023f8132dbc7b566ae564e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_antonelli, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 01 20:37:34 compute-0 podman[117798]: 2025-12-01 20:37:34.49586336 +0000 UTC m=+0.179865515 container start 52b7dadb6436b60e4b45eea2422c56cc3897740a74023f8132dbc7b566ae564e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 01 20:37:34 compute-0 dazzling_antonelli[117815]: 167 167
Dec 01 20:37:34 compute-0 systemd[1]: libpod-52b7dadb6436b60e4b45eea2422c56cc3897740a74023f8132dbc7b566ae564e.scope: Deactivated successfully.
Dec 01 20:37:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:34 compute-0 podman[117798]: 2025-12-01 20:37:34.611121201 +0000 UTC m=+0.295123356 container attach 52b7dadb6436b60e4b45eea2422c56cc3897740a74023f8132dbc7b566ae564e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 01 20:37:34 compute-0 podman[117798]: 2025-12-01 20:37:34.612549123 +0000 UTC m=+0.296551278 container died 52b7dadb6436b60e4b45eea2422c56cc3897740a74023f8132dbc7b566ae564e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 01 20:37:34 compute-0 sudo[117981]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxvhmnsodnwjdfpfclpznqnheijpbwac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621454.550929-82-135552480673502/AnsiballZ_stat.py'
Dec 01 20:37:34 compute-0 sudo[117981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:37:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:37:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:37:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:37:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:37:34 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e5bd84ec5b697ac2a351331f46c663edb5c4dda2398583bb6a28714dc7def66-merged.mount: Deactivated successfully.
Dec 01 20:37:34 compute-0 podman[117798]: 2025-12-01 20:37:34.938805275 +0000 UTC m=+0.622807420 container remove 52b7dadb6436b60e4b45eea2422c56cc3897740a74023f8132dbc7b566ae564e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_antonelli, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:37:34 compute-0 systemd[1]: libpod-conmon-52b7dadb6436b60e4b45eea2422c56cc3897740a74023f8132dbc7b566ae564e.scope: Deactivated successfully.
Dec 01 20:37:34 compute-0 python3.9[117983]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:35 compute-0 sudo[117981]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:35 compute-0 podman[117993]: 2025-12-01 20:37:35.074775832 +0000 UTC m=+0.022661736 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:37:35 compute-0 podman[117993]: 2025-12-01 20:37:35.189511527 +0000 UTC m=+0.137397421 container create e264e22a12cb8a1be44309a9349d5c396c1b141c935c5ab0a4d6a84167f0880a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chatelet, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:37:35 compute-0 sudo[118080]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhdxzhtmhpbsuprltfwinwemnhsfnchv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621454.550929-82-135552480673502/AnsiballZ_file.py'
Dec 01 20:37:35 compute-0 sudo[118080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:35 compute-0 systemd[1]: Started libpod-conmon-e264e22a12cb8a1be44309a9349d5c396c1b141c935c5ab0a4d6a84167f0880a.scope.
Dec 01 20:37:35 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:37:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e615d3265bbda0862d6b08aa0c74d033c3fe79a2749938420783eaf2af8802/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e615d3265bbda0862d6b08aa0c74d033c3fe79a2749938420783eaf2af8802/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e615d3265bbda0862d6b08aa0c74d033c3fe79a2749938420783eaf2af8802/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e615d3265bbda0862d6b08aa0c74d033c3fe79a2749938420783eaf2af8802/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e615d3265bbda0862d6b08aa0c74d033c3fe79a2749938420783eaf2af8802/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:35 compute-0 podman[117993]: 2025-12-01 20:37:35.413552846 +0000 UTC m=+0.361438750 container init e264e22a12cb8a1be44309a9349d5c396c1b141c935c5ab0a4d6a84167f0880a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:37:35 compute-0 podman[117993]: 2025-12-01 20:37:35.422403749 +0000 UTC m=+0.370289633 container start e264e22a12cb8a1be44309a9349d5c396c1b141c935c5ab0a4d6a84167f0880a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chatelet, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 01 20:37:35 compute-0 podman[117993]: 2025-12-01 20:37:35.426845192 +0000 UTC m=+0.374731106 container attach e264e22a12cb8a1be44309a9349d5c396c1b141c935c5ab0a4d6a84167f0880a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chatelet, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 01 20:37:35 compute-0 python3.9[118082]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:37:35 compute-0 sudo[118080]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:35 compute-0 sudo[118253]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhcwynfpxhsmuoujingmqgetaxyvrlyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621455.6286764-105-261946517038685/AnsiballZ_file.py'
Dec 01 20:37:35 compute-0 sudo[118253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:35 compute-0 dreamy_chatelet[118085]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:37:35 compute-0 dreamy_chatelet[118085]: --> All data devices are unavailable
Dec 01 20:37:35 compute-0 systemd[1]: libpod-e264e22a12cb8a1be44309a9349d5c396c1b141c935c5ab0a4d6a84167f0880a.scope: Deactivated successfully.
Dec 01 20:37:35 compute-0 podman[117993]: 2025-12-01 20:37:35.881796363 +0000 UTC m=+0.829682277 container died e264e22a12cb8a1be44309a9349d5c396c1b141c935c5ab0a4d6a84167f0880a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chatelet, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 01 20:37:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-15e615d3265bbda0862d6b08aa0c74d033c3fe79a2749938420783eaf2af8802-merged.mount: Deactivated successfully.
Dec 01 20:37:35 compute-0 podman[117993]: 2025-12-01 20:37:35.929115792 +0000 UTC m=+0.877001676 container remove e264e22a12cb8a1be44309a9349d5c396c1b141c935c5ab0a4d6a84167f0880a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chatelet, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:37:35 compute-0 ceph-mon[75880]: pgmap v236: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:35 compute-0 systemd[1]: libpod-conmon-e264e22a12cb8a1be44309a9349d5c396c1b141c935c5ab0a4d6a84167f0880a.scope: Deactivated successfully.
Dec 01 20:37:35 compute-0 sudo[117690]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:36 compute-0 sudo[118269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:37:36 compute-0 sudo[118269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:37:36 compute-0 sudo[118269]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:36 compute-0 python3.9[118256]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:36 compute-0 sudo[118253]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:36 compute-0 sudo[118294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:37:36 compute-0 sudo[118294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:37:36 compute-0 podman[118407]: 2025-12-01 20:37:36.358981748 +0000 UTC m=+0.041324352 container create 5540a5ab237705706ef41a6a45a39fc55909d8b124386cfefa61c4169cabc1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:37:36 compute-0 systemd[1]: Started libpod-conmon-5540a5ab237705706ef41a6a45a39fc55909d8b124386cfefa61c4169cabc1ce.scope.
Dec 01 20:37:36 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:37:36 compute-0 podman[118407]: 2025-12-01 20:37:36.33956347 +0000 UTC m=+0.021906104 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:37:36 compute-0 podman[118407]: 2025-12-01 20:37:36.442495193 +0000 UTC m=+0.124837817 container init 5540a5ab237705706ef41a6a45a39fc55909d8b124386cfefa61c4169cabc1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:37:36 compute-0 podman[118407]: 2025-12-01 20:37:36.44810207 +0000 UTC m=+0.130444684 container start 5540a5ab237705706ef41a6a45a39fc55909d8b124386cfefa61c4169cabc1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mestorf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec 01 20:37:36 compute-0 podman[118407]: 2025-12-01 20:37:36.451230303 +0000 UTC m=+0.133572937 container attach 5540a5ab237705706ef41a6a45a39fc55909d8b124386cfefa61c4169cabc1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mestorf, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:37:36 compute-0 hardcore_mestorf[118454]: 167 167
Dec 01 20:37:36 compute-0 systemd[1]: libpod-5540a5ab237705706ef41a6a45a39fc55909d8b124386cfefa61c4169cabc1ce.scope: Deactivated successfully.
Dec 01 20:37:36 compute-0 conmon[118454]: conmon 5540a5ab237705706ef4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5540a5ab237705706ef41a6a45a39fc55909d8b124386cfefa61c4169cabc1ce.scope/container/memory.events
Dec 01 20:37:36 compute-0 podman[118407]: 2025-12-01 20:37:36.454447579 +0000 UTC m=+0.136790193 container died 5540a5ab237705706ef41a6a45a39fc55909d8b124386cfefa61c4169cabc1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:37:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-74c538257d1bc0cb6c71f0a063ea2984be5c62bc05501e62a61072bc8b25ecda-merged.mount: Deactivated successfully.
Dec 01 20:37:36 compute-0 sudo[118512]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqbkhbfenccwaqklpqkhrepxglwltxhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621456.2560387-113-66831553653886/AnsiballZ_stat.py'
Dec 01 20:37:36 compute-0 sudo[118512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:36 compute-0 podman[118407]: 2025-12-01 20:37:36.497386177 +0000 UTC m=+0.179728791 container remove 5540a5ab237705706ef41a6a45a39fc55909d8b124386cfefa61c4169cabc1ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 01 20:37:36 compute-0 systemd[1]: libpod-conmon-5540a5ab237705706ef41a6a45a39fc55909d8b124386cfefa61c4169cabc1ce.scope: Deactivated successfully.
Dec 01 20:37:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:36 compute-0 podman[118523]: 2025-12-01 20:37:36.652308119 +0000 UTC m=+0.038135296 container create d00b7bae67464b7eaf52e36b70f403b34ff336cefaf17067be97ca6cfd638d76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:37:36 compute-0 python3.9[118515]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:36 compute-0 systemd[1]: Started libpod-conmon-d00b7bae67464b7eaf52e36b70f403b34ff336cefaf17067be97ca6cfd638d76.scope.
Dec 01 20:37:36 compute-0 sudo[118512]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:36 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ed3f06443f4d0b806b6ddc551fffa17f130f8e5270bdb0f43d76e361dad47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ed3f06443f4d0b806b6ddc551fffa17f130f8e5270bdb0f43d76e361dad47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ed3f06443f4d0b806b6ddc551fffa17f130f8e5270bdb0f43d76e361dad47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451ed3f06443f4d0b806b6ddc551fffa17f130f8e5270bdb0f43d76e361dad47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:36 compute-0 podman[118523]: 2025-12-01 20:37:36.635581921 +0000 UTC m=+0.021409128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:37:36 compute-0 podman[118523]: 2025-12-01 20:37:36.740433311 +0000 UTC m=+0.126260508 container init d00b7bae67464b7eaf52e36b70f403b34ff336cefaf17067be97ca6cfd638d76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wilbur, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:37:36 compute-0 podman[118523]: 2025-12-01 20:37:36.746795551 +0000 UTC m=+0.132622728 container start d00b7bae67464b7eaf52e36b70f403b34ff336cefaf17067be97ca6cfd638d76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 01 20:37:36 compute-0 podman[118523]: 2025-12-01 20:37:36.754527171 +0000 UTC m=+0.140354378 container attach d00b7bae67464b7eaf52e36b70f403b34ff336cefaf17067be97ca6cfd638d76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wilbur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:37:36 compute-0 sudo[118620]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgjarblvvcfluiwsnaidcracgftnitfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621456.2560387-113-66831553653886/AnsiballZ_file.py'
Dec 01 20:37:36 compute-0 sudo[118620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]: {
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:     "0": [
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:         {
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "devices": [
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "/dev/loop3"
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             ],
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_name": "ceph_lv0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_size": "21470642176",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "name": "ceph_lv0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "tags": {
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.cluster_name": "ceph",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.crush_device_class": "",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.encrypted": "0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.objectstore": "bluestore",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.osd_id": "0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.type": "block",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.vdo": "0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.with_tpm": "0"
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             },
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "type": "block",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "vg_name": "ceph_vg0"
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:         }
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:     ],
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:     "1": [
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:         {
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "devices": [
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "/dev/loop4"
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             ],
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_name": "ceph_lv1",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_size": "21470642176",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "name": "ceph_lv1",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "tags": {
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.cluster_name": "ceph",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.crush_device_class": "",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.encrypted": "0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.objectstore": "bluestore",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.osd_id": "1",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.type": "block",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.vdo": "0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.with_tpm": "0"
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             },
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "type": "block",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "vg_name": "ceph_vg1"
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:         }
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:     ],
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:     "2": [
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:         {
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "devices": [
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "/dev/loop5"
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             ],
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_name": "ceph_lv2",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_size": "21470642176",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "name": "ceph_lv2",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "tags": {
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.cluster_name": "ceph",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.crush_device_class": "",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.encrypted": "0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.objectstore": "bluestore",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.osd_id": "2",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.type": "block",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.vdo": "0",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:                 "ceph.with_tpm": "0"
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             },
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "type": "block",
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:             "vg_name": "ceph_vg2"
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:         }
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]:     ]
Dec 01 20:37:37 compute-0 reverent_wilbur[118542]: }
Dec 01 20:37:37 compute-0 systemd[1]: libpod-d00b7bae67464b7eaf52e36b70f403b34ff336cefaf17067be97ca6cfd638d76.scope: Deactivated successfully.
Dec 01 20:37:37 compute-0 podman[118523]: 2025-12-01 20:37:37.062452996 +0000 UTC m=+0.448280223 container died d00b7bae67464b7eaf52e36b70f403b34ff336cefaf17067be97ca6cfd638d76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:37:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-451ed3f06443f4d0b806b6ddc551fffa17f130f8e5270bdb0f43d76e361dad47-merged.mount: Deactivated successfully.
Dec 01 20:37:37 compute-0 podman[118523]: 2025-12-01 20:37:37.110205998 +0000 UTC m=+0.496033215 container remove d00b7bae67464b7eaf52e36b70f403b34ff336cefaf17067be97ca6cfd638d76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:37:37 compute-0 systemd[1]: libpod-conmon-d00b7bae67464b7eaf52e36b70f403b34ff336cefaf17067be97ca6cfd638d76.scope: Deactivated successfully.
Dec 01 20:37:37 compute-0 python3.9[118623]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:37 compute-0 sudo[118620]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:37 compute-0 sudo[118294]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:37 compute-0 sudo[118638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:37:37 compute-0 sudo[118638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:37:37 compute-0 sudo[118638]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:37 compute-0 sudo[118687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:37:37 compute-0 sudo[118687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:37:37 compute-0 sudo[118856]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twdapnayibekbemcywzxxyiynmoricwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621457.311136-125-14344786725192/AnsiballZ_stat.py'
Dec 01 20:37:37 compute-0 sudo[118856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:37 compute-0 podman[118844]: 2025-12-01 20:37:37.592309458 +0000 UTC m=+0.042331991 container create 88c32751e25c814c34321d136b5f8de68d283f287c29f5c60688c9ea34ff1659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 01 20:37:37 compute-0 systemd[1]: Started libpod-conmon-88c32751e25c814c34321d136b5f8de68d283f287c29f5c60688c9ea34ff1659.scope.
Dec 01 20:37:37 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:37:37 compute-0 podman[118844]: 2025-12-01 20:37:37.652007665 +0000 UTC m=+0.102030218 container init 88c32751e25c814c34321d136b5f8de68d283f287c29f5c60688c9ea34ff1659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:37:37 compute-0 podman[118844]: 2025-12-01 20:37:37.658031025 +0000 UTC m=+0.108053558 container start 88c32751e25c814c34321d136b5f8de68d283f287c29f5c60688c9ea34ff1659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 01 20:37:37 compute-0 competent_perlman[118868]: 167 167
Dec 01 20:37:37 compute-0 systemd[1]: libpod-88c32751e25c814c34321d136b5f8de68d283f287c29f5c60688c9ea34ff1659.scope: Deactivated successfully.
Dec 01 20:37:37 compute-0 podman[118844]: 2025-12-01 20:37:37.663937651 +0000 UTC m=+0.113960194 container attach 88c32751e25c814c34321d136b5f8de68d283f287c29f5c60688c9ea34ff1659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:37:37 compute-0 podman[118844]: 2025-12-01 20:37:37.664381763 +0000 UTC m=+0.114404306 container died 88c32751e25c814c34321d136b5f8de68d283f287c29f5c60688c9ea34ff1659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_perlman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 01 20:37:37 compute-0 podman[118844]: 2025-12-01 20:37:37.572618782 +0000 UTC m=+0.022641355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:37:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-70df108235517fd96661a42374f983e93caeeaf72e1a4efb628f200aca40b796-merged.mount: Deactivated successfully.
Dec 01 20:37:37 compute-0 podman[118844]: 2025-12-01 20:37:37.72845376 +0000 UTC m=+0.178476323 container remove 88c32751e25c814c34321d136b5f8de68d283f287c29f5c60688c9ea34ff1659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 01 20:37:37 compute-0 systemd[1]: libpod-conmon-88c32751e25c814c34321d136b5f8de68d283f287c29f5c60688c9ea34ff1659.scope: Deactivated successfully.
Dec 01 20:37:37 compute-0 python3.9[118863]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:37 compute-0 sudo[118856]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:37 compute-0 podman[118918]: 2025-12-01 20:37:37.93035646 +0000 UTC m=+0.050078601 container create f7484eb25b0cf01ac329e3d047b7d0f83ae0d54424935ddefa1a7cc9976c10c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:37:37 compute-0 ceph-mon[75880]: pgmap v237: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:37 compute-0 systemd[1]: Started libpod-conmon-f7484eb25b0cf01ac329e3d047b7d0f83ae0d54424935ddefa1a7cc9976c10c7.scope.
Dec 01 20:37:37 compute-0 podman[118918]: 2025-12-01 20:37:37.902972815 +0000 UTC m=+0.022694996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:37:38 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6a60f11959bddad938fee4b320a8a352ed40175d30abfd07d5ce0ad508c4521/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:38 compute-0 sudo[118986]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjouxhjaodkkubtkkystuacahrkkimxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621457.311136-125-14344786725192/AnsiballZ_file.py'
Dec 01 20:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6a60f11959bddad938fee4b320a8a352ed40175d30abfd07d5ce0ad508c4521/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:38 compute-0 sudo[118986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6a60f11959bddad938fee4b320a8a352ed40175d30abfd07d5ce0ad508c4521/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6a60f11959bddad938fee4b320a8a352ed40175d30abfd07d5ce0ad508c4521/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:37:38 compute-0 podman[118918]: 2025-12-01 20:37:38.035327075 +0000 UTC m=+0.155049246 container init f7484eb25b0cf01ac329e3d047b7d0f83ae0d54424935ddefa1a7cc9976c10c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noether, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 01 20:37:38 compute-0 podman[118918]: 2025-12-01 20:37:38.048106425 +0000 UTC m=+0.167828566 container start f7484eb25b0cf01ac329e3d047b7d0f83ae0d54424935ddefa1a7cc9976c10c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Dec 01 20:37:38 compute-0 podman[118918]: 2025-12-01 20:37:38.052018492 +0000 UTC m=+0.171740693 container attach f7484eb25b0cf01ac329e3d047b7d0f83ae0d54424935ddefa1a7cc9976c10c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:37:38 compute-0 python3.9[118989]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:38 compute-0 sudo[118986]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:38 compute-0 lvm[119142]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:37:38 compute-0 lvm[119142]: VG ceph_vg0 finished
Dec 01 20:37:38 compute-0 lvm[119143]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:37:38 compute-0 lvm[119143]: VG ceph_vg1 finished
Dec 01 20:37:38 compute-0 lvm[119145]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:37:38 compute-0 lvm[119145]: VG ceph_vg2 finished
Dec 01 20:37:38 compute-0 jolly_noether[118984]: {}
Dec 01 20:37:38 compute-0 systemd[1]: libpod-f7484eb25b0cf01ac329e3d047b7d0f83ae0d54424935ddefa1a7cc9976c10c7.scope: Deactivated successfully.
Dec 01 20:37:38 compute-0 systemd[1]: libpod-f7484eb25b0cf01ac329e3d047b7d0f83ae0d54424935ddefa1a7cc9976c10c7.scope: Consumed 1.247s CPU time.
Dec 01 20:37:38 compute-0 podman[118918]: 2025-12-01 20:37:38.836870914 +0000 UTC m=+0.956593055 container died f7484eb25b0cf01ac329e3d047b7d0f83ae0d54424935ddefa1a7cc9976c10c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noether, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6a60f11959bddad938fee4b320a8a352ed40175d30abfd07d5ce0ad508c4521-merged.mount: Deactivated successfully.
Dec 01 20:37:38 compute-0 podman[118918]: 2025-12-01 20:37:38.889855461 +0000 UTC m=+1.009577602 container remove f7484eb25b0cf01ac329e3d047b7d0f83ae0d54424935ddefa1a7cc9976c10c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_noether, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:37:38 compute-0 systemd[1]: libpod-conmon-f7484eb25b0cf01ac329e3d047b7d0f83ae0d54424935ddefa1a7cc9976c10c7.scope: Deactivated successfully.
Dec 01 20:37:38 compute-0 sudo[119231]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euuuwvzwgqkgheuefxhbiarmaijkyyfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621458.3931408-137-170491325116962/AnsiballZ_systemd.py'
Dec 01 20:37:38 compute-0 sudo[118687]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:38 compute-0 sudo[119231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:37:38 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:37:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:37:38 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:37:38 compute-0 sudo[119234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:37:38 compute-0 sudo[119234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:37:38 compute-0 sudo[119234]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:39 compute-0 python3.9[119233]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:37:39 compute-0 systemd[1]: Reloading.
Dec 01 20:37:39 compute-0 systemd-rc-local-generator[119284]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:37:39 compute-0 systemd-sysv-generator[119290]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:37:39 compute-0 sudo[119231]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:39 compute-0 ceph-mon[75880]: pgmap v238: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:39 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:37:39 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:37:40 compute-0 sudo[119446]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zskfytlrctiezuravsaysmdljegsthkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621459.7468092-145-162035251037753/AnsiballZ_stat.py'
Dec 01 20:37:40 compute-0 sudo[119446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:40 compute-0 python3.9[119448]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:40 compute-0 sudo[119446]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:37:40 compute-0 sudo[119524]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfmxbzhhpqswoyoybcspldjckqnqjdrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621459.7468092-145-162035251037753/AnsiballZ_file.py'
Dec 01 20:37:40 compute-0 sudo[119524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:40 compute-0 python3.9[119526]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:40 compute-0 sudo[119524]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:41 compute-0 sudo[119676]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwalsiurfguykauhfcvnwbcgshbdrjqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621460.866657-157-14345109148597/AnsiballZ_stat.py'
Dec 01 20:37:41 compute-0 sudo[119676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:41 compute-0 python3.9[119678]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:41 compute-0 sudo[119676]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:41 compute-0 sudo[119754]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqptrozfysjziczpacysnbfafvezgfhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621460.866657-157-14345109148597/AnsiballZ_file.py'
Dec 01 20:37:41 compute-0 sudo[119754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:41 compute-0 python3.9[119756]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:41 compute-0 sudo[119754]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:41 compute-0 ceph-mon[75880]: pgmap v239: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:42 compute-0 sudo[119906]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flznuusgykimywjgihdbjobsjirgtmoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621461.996614-169-140089680828237/AnsiballZ_systemd.py'
Dec 01 20:37:42 compute-0 sudo[119906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:42 compute-0 python3.9[119908]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:37:42 compute-0 systemd[1]: Reloading.
Dec 01 20:37:42 compute-0 systemd-rc-local-generator[119938]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:37:42 compute-0 systemd-sysv-generator[119941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:37:42 compute-0 systemd[1]: Starting Create netns directory...
Dec 01 20:37:42 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 01 20:37:42 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 01 20:37:42 compute-0 systemd[1]: Finished Create netns directory.
Dec 01 20:37:43 compute-0 sudo[119906]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:43 compute-0 python3.9[120101]: ansible-ansible.builtin.service_facts Invoked
Dec 01 20:37:43 compute-0 network[120118]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 01 20:37:43 compute-0 network[120119]: 'network-scripts' will be removed from distribution in near future.
Dec 01 20:37:43 compute-0 network[120120]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 01 20:37:43 compute-0 ceph-mon[75880]: pgmap v240: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:45 compute-0 ceph-mon[75880]: pgmap v241: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:47 compute-0 ceph-mon[75880]: pgmap v242: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:48 compute-0 sudo[120381]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xowaurebnkoovgwauvoxamgdsfdxhjsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621468.136238-195-77596961044414/AnsiballZ_stat.py'
Dec 01 20:37:48 compute-0 sudo[120381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:48 compute-0 python3.9[120383]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:48 compute-0 sudo[120381]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:48 compute-0 sudo[120459]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmwrnpefyueawaqaiehjzziojaozfbad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621468.136238-195-77596961044414/AnsiballZ_file.py'
Dec 01 20:37:48 compute-0 sudo[120459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:49 compute-0 python3.9[120461]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:49 compute-0 sudo[120459]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:49 compute-0 sudo[120611]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcfclojaxurmaltiwhadlovspgftdxnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621469.2890675-208-107015496659350/AnsiballZ_file.py'
Dec 01 20:37:49 compute-0 sudo[120611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:49 compute-0 python3.9[120613]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:49 compute-0 sudo[120611]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:49 compute-0 ceph-mon[75880]: pgmap v243: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:50 compute-0 sudo[120763]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jojphylvzmjlsdeaagynfyuwexugijzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621469.929708-216-237148697202421/AnsiballZ_stat.py'
Dec 01 20:37:50 compute-0 sudo[120763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:50 compute-0 python3.9[120765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:50 compute-0 sudo[120763]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:50 compute-0 sudo[120841]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykndvwwusguewzcknuibzcxynynuurjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621469.929708-216-237148697202421/AnsiballZ_file.py'
Dec 01 20:37:50 compute-0 sudo[120841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:50 compute-0 python3.9[120843]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:50 compute-0 sudo[120841]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:51 compute-0 sudo[120993]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdwoeucizrvanxdscznicjyrqutzutzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621471.2100027-231-176650659708791/AnsiballZ_timezone.py'
Dec 01 20:37:51 compute-0 sudo[120993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:51 compute-0 python3.9[120995]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 01 20:37:51 compute-0 systemd[1]: Starting Time & Date Service...
Dec 01 20:37:51 compute-0 ceph-mon[75880]: pgmap v244: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:52 compute-0 systemd[1]: Started Time & Date Service.
Dec 01 20:37:52 compute-0 sudo[120993]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:52 compute-0 sudo[121149]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnslsettxiovqiwlpcytydazyrkbcxsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621472.3327532-240-216513763317933/AnsiballZ_file.py'
Dec 01 20:37:52 compute-0 sudo[121149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:52 compute-0 python3.9[121151]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:52 compute-0 sudo[121149]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:53 compute-0 sudo[121301]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzlnpjowxjskkuzwrrmfbifqerxuarmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621473.0295467-248-104005021710871/AnsiballZ_stat.py'
Dec 01 20:37:53 compute-0 sudo[121301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:53 compute-0 python3.9[121303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:53 compute-0 sudo[121301]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:53 compute-0 sudo[121379]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkvfvfmzdxuvouurnylivklsqjhivnnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621473.0295467-248-104005021710871/AnsiballZ_file.py'
Dec 01 20:37:53 compute-0 sudo[121379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:53 compute-0 python3.9[121381]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:53 compute-0 sudo[121379]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:54 compute-0 ceph-mon[75880]: pgmap v245: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:54 compute-0 sudo[121531]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcfjssrhyitrivzcuuqijunkwzluypqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621474.0548697-260-159810738766130/AnsiballZ_stat.py'
Dec 01 20:37:54 compute-0 sudo[121531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:54 compute-0 python3.9[121533]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:54 compute-0 sudo[121531]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:54 compute-0 sudo[121609]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldxoirrjcnfowjbchxcsolpbtfocnjby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621474.0548697-260-159810738766130/AnsiballZ_file.py'
Dec 01 20:37:54 compute-0 sudo[121609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:37:56 compute-0 ceph-mon[75880]: pgmap v246: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:56 compute-0 python3.9[121611]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ae8b5p85 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:56 compute-0 sudo[121609]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v247: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:56 compute-0 sudo[121761]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnwtrrfbhsozglnctwrgqlcxszakuadb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621476.5579886-272-67633002343598/AnsiballZ_stat.py'
Dec 01 20:37:56 compute-0 sudo[121761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:57 compute-0 python3.9[121763]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:37:57 compute-0 sudo[121761]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:57 compute-0 sudo[121839]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iihmbpliebwhwguwdrxmmvrrrrgacvlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621476.5579886-272-67633002343598/AnsiballZ_file.py'
Dec 01 20:37:57 compute-0 sudo[121839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:57 compute-0 python3.9[121841]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:37:57 compute-0 sudo[121839]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:58 compute-0 sudo[121991]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwiztetwfcrwlsyhgecqnoxpuqzwkpia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621477.6900516-285-211179616779946/AnsiballZ_command.py'
Dec 01 20:37:58 compute-0 sudo[121991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:58 compute-0 ceph-mon[75880]: pgmap v247: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:58 compute-0 python3.9[121993]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:37:58 compute-0 sudo[121991]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:37:59 compute-0 sudo[122144]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imomxsvaxqnjylowoedsheylqajbczvo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764621478.5238912-293-234152719614857/AnsiballZ_edpm_nftables_from_files.py'
Dec 01 20:37:59 compute-0 sudo[122144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:59 compute-0 python3[122146]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 01 20:37:59 compute-0 sudo[122144]: pam_unix(sudo:session): session closed for user root
Dec 01 20:37:59 compute-0 sudo[122296]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znximcjcojcyzwmphlpxayhxyzmindht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621479.4866931-301-204849646271287/AnsiballZ_stat.py'
Dec 01 20:37:59 compute-0 sudo[122296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:37:59 compute-0 python3.9[122298]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:38:00 compute-0 sudo[122296]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:00 compute-0 ceph-mon[75880]: pgmap v248: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:00 compute-0 sudo[122374]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyrggjhprpukmainjwziuegffhilxstk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621479.4866931-301-204849646271287/AnsiballZ_file.py'
Dec 01 20:38:00 compute-0 sudo[122374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:00 compute-0 python3.9[122376]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:00 compute-0 sudo[122374]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v249: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:01 compute-0 sudo[122526]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksimydwztlextkwcviwztrfznfanizsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621480.661286-313-105484662011268/AnsiballZ_stat.py'
Dec 01 20:38:01 compute-0 sudo[122526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:01 compute-0 python3.9[122528]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:38:01 compute-0 sudo[122526]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:01 compute-0 sudo[122604]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heltswtdfhrgxehzzhvpiejvsrwxgwjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621480.661286-313-105484662011268/AnsiballZ_file.py'
Dec 01 20:38:01 compute-0 sudo[122604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:01 compute-0 python3.9[122606]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:01 compute-0 sudo[122604]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:02 compute-0 ceph-mon[75880]: pgmap v249: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:02 compute-0 sudo[122756]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iamjfghgphchrgszxnmvwcilxozkkjwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621481.8453946-325-158390118240994/AnsiballZ_stat.py'
Dec 01 20:38:02 compute-0 sudo[122756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:02 compute-0 python3.9[122758]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:38:02 compute-0 sudo[122756]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:02 compute-0 sudo[122834]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvqmpnulvavnbwscgntmppqsqchoggbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621481.8453946-325-158390118240994/AnsiballZ_file.py'
Dec 01 20:38:02 compute-0 sudo[122834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:02 compute-0 python3.9[122836]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:02 compute-0 sudo[122834]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:03 compute-0 ceph-mon[75880]: pgmap v250: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:38:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:38:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:38:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:38:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:38:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:38:03 compute-0 sudo[122986]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jusbzhxryhhmflmgeyinlpabpyeievct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621483.0233026-337-92864687062187/AnsiballZ_stat.py'
Dec 01 20:38:03 compute-0 sudo[122986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:03 compute-0 python3.9[122988]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:38:03 compute-0 sudo[122986]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:03 compute-0 sudo[123064]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktfdhwdixdcsiwubgptvwjlzzhvjcias ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621483.0233026-337-92864687062187/AnsiballZ_file.py'
Dec 01 20:38:03 compute-0 sudo[123064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:03 compute-0 python3.9[123066]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:03 compute-0 sudo[123064]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:04 compute-0 sudo[123216]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksrxfjdlxuzisyexzswhagbzychbxcwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621484.1350894-349-150694449793509/AnsiballZ_stat.py'
Dec 01 20:38:04 compute-0 sudo[123216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:04 compute-0 python3.9[123218]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:38:04 compute-0 sudo[123216]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:04 compute-0 sudo[123294]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuimnlzvlgswgjcrzzqxilbxlomxysqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621484.1350894-349-150694449793509/AnsiballZ_file.py'
Dec 01 20:38:04 compute-0 sudo[123294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:05 compute-0 python3.9[123296]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:05 compute-0 sudo[123294]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:05 compute-0 ceph-mon[75880]: pgmap v251: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:05 compute-0 sudo[123446]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajvkmahfetmvyupbzlbnmvnynmatpcmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621485.3582497-362-102966704942651/AnsiballZ_command.py'
Dec 01 20:38:05 compute-0 sudo[123446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:05 compute-0 python3.9[123448]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:38:05 compute-0 sudo[123446]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:06 compute-0 sudo[123601]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iziowptcyycogorytpxxjkuijbcaebpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621486.0937524-370-82345582426287/AnsiballZ_blockinfile.py'
Dec 01 20:38:06 compute-0 sudo[123601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:06 compute-0 python3.9[123603]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:06 compute-0 sudo[123601]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:07 compute-0 sudo[123753]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybhdzneotuhqykvfzituwzloskzrmkor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621486.8933282-379-23839637694781/AnsiballZ_file.py'
Dec 01 20:38:07 compute-0 sudo[123753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:07 compute-0 python3.9[123755]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:07 compute-0 sudo[123753]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:07 compute-0 sudo[123905]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ionttmmoklikqomryoxmjhkjqclnwkgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621487.4834843-379-63258391383223/AnsiballZ_file.py'
Dec 01 20:38:07 compute-0 sudo[123905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:07 compute-0 python3.9[123907]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:07 compute-0 sudo[123905]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:07 compute-0 ceph-mon[75880]: pgmap v252: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:08 compute-0 sudo[124057]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spcvjkznhdrxpzlxrnhdgzudcluxjmpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621488.1140769-394-172776325687053/AnsiballZ_mount.py'
Dec 01 20:38:08 compute-0 sudo[124057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:08 compute-0 python3.9[124059]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 01 20:38:08 compute-0 sudo[124057]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:09 compute-0 sudo[124209]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaiwlscmppnzuemtobjquzvruzmndckn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621488.9411783-394-241607827424066/AnsiballZ_mount.py'
Dec 01 20:38:09 compute-0 sudo[124209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:09 compute-0 python3.9[124211]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 01 20:38:09 compute-0 sudo[124209]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:09 compute-0 sshd-session[116484]: Connection closed by 192.168.122.30 port 38744
Dec 01 20:38:09 compute-0 sshd-session[116481]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:38:09 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec 01 20:38:09 compute-0 systemd[1]: session-41.scope: Consumed 29.305s CPU time.
Dec 01 20:38:09 compute-0 systemd-logind[796]: Session 41 logged out. Waiting for processes to exit.
Dec 01 20:38:09 compute-0 systemd-logind[796]: Removed session 41.
Dec 01 20:38:09 compute-0 ceph-mon[75880]: pgmap v253: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v254: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:12 compute-0 ceph-mon[75880]: pgmap v254: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:13 compute-0 ceph-mon[75880]: pgmap v255: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:15 compute-0 sshd-session[124236]: Accepted publickey for zuul from 192.168.122.30 port 34866 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:38:15 compute-0 systemd-logind[796]: New session 42 of user zuul.
Dec 01 20:38:15 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec 01 20:38:15 compute-0 sshd-session[124236]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:38:15 compute-0 sshd-session[72066]: Received disconnect from 38.102.83.9 port 39426:11: disconnected by user
Dec 01 20:38:15 compute-0 sshd-session[72066]: Disconnected from user zuul 38.102.83.9 port 39426
Dec 01 20:38:15 compute-0 sshd-session[72063]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:38:15 compute-0 systemd-logind[796]: Session 19 logged out. Waiting for processes to exit.
Dec 01 20:38:15 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 01 20:38:15 compute-0 systemd[1]: session-19.scope: Consumed 1min 30.838s CPU time.
Dec 01 20:38:15 compute-0 systemd-logind[796]: Removed session 19.
Dec 01 20:38:15 compute-0 ceph-mon[75880]: pgmap v256: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:15 compute-0 sudo[124389]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpbmgfqgbrzyekuzrylmnxkavwscqxng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621495.3610353-16-113079026383319/AnsiballZ_tempfile.py'
Dec 01 20:38:15 compute-0 sudo[124389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:15 compute-0 python3.9[124391]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 01 20:38:16 compute-0 sudo[124389]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:16 compute-0 sudo[124541]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbbccvdnqzigzrhyvewovfeazocvvogh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621496.1589775-28-244616182503748/AnsiballZ_stat.py'
Dec 01 20:38:16 compute-0 sudo[124541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:16 compute-0 python3.9[124543]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:38:16 compute-0 sudo[124541]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:17 compute-0 sudo[124695]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfxmxnqcxqclobmgzuitohialltzkptv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621496.9257257-36-206711438471403/AnsiballZ_slurp.py'
Dec 01 20:38:17 compute-0 sudo[124695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:17 compute-0 python3.9[124697]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 01 20:38:17 compute-0 sudo[124695]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:17 compute-0 sudo[124847]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlkeouooyzssllroukyjsdzlttqidtoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621497.6761467-44-17091984401418/AnsiballZ_stat.py'
Dec 01 20:38:17 compute-0 sudo[124847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:17 compute-0 ceph-mon[75880]: pgmap v257: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:18 compute-0 python3.9[124849]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.e34s1dbi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:38:18 compute-0 sudo[124847]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:18 compute-0 sudo[124972]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdoeyhhjfzcohdklxucuyscmginhmohd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621497.6761467-44-17091984401418/AnsiballZ_copy.py'
Dec 01 20:38:18 compute-0 sudo[124972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:18 compute-0 python3.9[124974]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.e34s1dbi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621497.6761467-44-17091984401418/.source.e34s1dbi _original_basename=.7ll02my_ follow=False checksum=80f03e5fc327c61f76cc3096241a9cf87ce76028 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:18 compute-0 sudo[124972]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:19 compute-0 sudo[125124]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hukyovwioqzrhfxdxoyktsbuphyvvowh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621498.8902643-59-274030855229790/AnsiballZ_setup.py'
Dec 01 20:38:19 compute-0 sudo[125124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:19 compute-0 python3.9[125126]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:38:19 compute-0 sudo[125124]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:19 compute-0 ceph-mon[75880]: pgmap v258: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:20 compute-0 sudo[125276]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwcuuwqnzyrxvhnrxdxhxcdngnoaqeta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621500.0655708-68-261028051295360/AnsiballZ_blockinfile.py'
Dec 01 20:38:20 compute-0 sudo[125276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:20 compute-0 python3.9[125278]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEtg5PojuPAz6xTrRrBGVFaY4f616wVkbk4JWBnE7XRAZC+o8ulOAcFDcenRNZI+OUaYuJJxrh734s3f9kKVWxpwDg6JPr+yX9ca/za0oEJKz+lyqzwZuFPEQg2i7BL/FFchcrU+rHMr78OnyeUZklBpfu79VWdnJiZ+gX3wZc5No5JHVVB9Tvc7DRGpB6ChOCRA3MsAzrKxI4r4Rrd/nyByUjU4fkCuUkbwd2spVGukPVBGXoayWAnhuUgTrW+lCh3nTtEV8dOTAOjbAZXZHCV2M0dLZxFICAqD36k+PVjSu8qWp2Hvu1g8B5N53Ujzft+1vyg53YnX5lPFXj4ONYa2ODnFte0RleXaYBTC++EGdlxuJ3J3B1FqEjfbN4eDNBRo/Rz7HyjVP1GzAPBuS2dUATfEzqbaQ944c7xPX5X2wz5taiXub+QeteDNVo4qcZiS88HsXzDhljPmecgrp5J82lj1b+gzj0asqIbToGE07if4P4UscX1iXX1EQHLZk=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILQi31M0bDjk/+NlS8JOYX6DC83uvFAj0UguhgYwpDOl
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJYnricobpj1rSSYWt5fQI/QAzYdALTS6eg9FvCz6/m5p01CoLr/PbypPcJyrWdb9MCRlc4meQr4pHa+OSei3h4=
                                              create=True mode=0644 path=/tmp/ansible.e34s1dbi state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:20 compute-0 sudo[125276]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:21 compute-0 sudo[125428]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neiboulscgqeryxmtssqvqktafkhiecj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621500.8048553-76-1350407795580/AnsiballZ_command.py'
Dec 01 20:38:21 compute-0 sudo[125428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:21 compute-0 python3.9[125430]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.e34s1dbi' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:38:21 compute-0 sudo[125428]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:22 compute-0 sudo[125582]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqvxxkasaffblyiwvfzzyxgxpmitvscq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621501.6500406-84-56857547054415/AnsiballZ_file.py'
Dec 01 20:38:22 compute-0 sudo[125582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:22 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 01 20:38:22 compute-0 ceph-mon[75880]: pgmap v259: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:22 compute-0 python3.9[125584]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.e34s1dbi state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:22 compute-0 sudo[125582]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:22 compute-0 sshd-session[124239]: Connection closed by 192.168.122.30 port 34866
Dec 01 20:38:22 compute-0 sshd-session[124236]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:38:22 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec 01 20:38:22 compute-0 systemd[1]: session-42.scope: Consumed 4.697s CPU time.
Dec 01 20:38:22 compute-0 systemd-logind[796]: Session 42 logged out. Waiting for processes to exit.
Dec 01 20:38:22 compute-0 systemd-logind[796]: Removed session 42.
Dec 01 20:38:24 compute-0 ceph-mon[75880]: pgmap v260: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:25 compute-0 ceph-mon[75880]: pgmap v261: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:27 compute-0 ceph-mon[75880]: pgmap v262: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:28 compute-0 sshd-session[125612]: Accepted publickey for zuul from 192.168.122.30 port 55000 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:38:28 compute-0 systemd-logind[796]: New session 43 of user zuul.
Dec 01 20:38:28 compute-0 systemd[1]: Started Session 43 of User zuul.
Dec 01 20:38:28 compute-0 sshd-session[125612]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:38:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:29 compute-0 python3.9[125765]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:38:30 compute-0 sudo[125919]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnqwfwrzfxoeoiqvabuaiaawrvhkhbiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621509.4973454-32-215465343798177/AnsiballZ_systemd.py'
Dec 01 20:38:30 compute-0 sudo[125919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:30 compute-0 python3.9[125921]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 01 20:38:30 compute-0 sudo[125919]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:30 compute-0 ceph-mon[75880]: pgmap v263: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:30 compute-0 sudo[126073]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtqwxkhwpwfubnfvlpzghdnpqodkustl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621510.6356332-40-141257174719119/AnsiballZ_systemd.py'
Dec 01 20:38:30 compute-0 sudo[126073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:31 compute-0 python3.9[126075]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:38:31 compute-0 sudo[126073]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:31 compute-0 ceph-mon[75880]: pgmap v264: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:31 compute-0 sudo[126226]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ticweonehaeivbouplehddyfkxcstntf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621511.5107007-49-256029502725661/AnsiballZ_command.py'
Dec 01 20:38:31 compute-0 sudo[126226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:32 compute-0 python3.9[126228]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:38:32 compute-0 sudo[126226]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:38:32
Dec 01 20:38:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:38:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:38:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'backups', 'cephfs.cephfs.data', 'volumes', 'images', '.mgr']
Dec 01 20:38:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:38:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:32 compute-0 sudo[126379]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjskwcraiqekxehujyhepefvsafifdlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621512.266291-57-7506511224540/AnsiballZ_stat.py'
Dec 01 20:38:32 compute-0 sudo[126379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:32 compute-0 python3.9[126381]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:38:32 compute-0 sudo[126379]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:38:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:38:33 compute-0 sudo[126531]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubxttebzodvyxcirqnhyoagytbmcbihq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621512.9727714-66-37062055540515/AnsiballZ_file.py'
Dec 01 20:38:33 compute-0 sudo[126531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:33 compute-0 python3.9[126533]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:38:33 compute-0 sudo[126531]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:33 compute-0 sshd-session[125615]: Connection closed by 192.168.122.30 port 55000
Dec 01 20:38:33 compute-0 sshd-session[125612]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:38:33 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Dec 01 20:38:33 compute-0 systemd[1]: session-43.scope: Consumed 3.607s CPU time.
Dec 01 20:38:33 compute-0 systemd-logind[796]: Session 43 logged out. Waiting for processes to exit.
Dec 01 20:38:33 compute-0 systemd-logind[796]: Removed session 43.
Dec 01 20:38:34 compute-0 ceph-mon[75880]: pgmap v265: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:35 compute-0 ceph-mon[75880]: pgmap v266: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:38 compute-0 ceph-mon[75880]: pgmap v267: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:39 compute-0 sudo[126559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:38:39 compute-0 sudo[126559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:38:39 compute-0 sudo[126559]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:39 compute-0 sudo[126584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:38:39 compute-0 sudo[126584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:38:39 compute-0 ceph-mon[75880]: pgmap v268: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:39 compute-0 sudo[126584]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:38:39 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:38:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:38:39 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:38:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:38:39 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:38:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:38:39 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:38:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:38:39 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:38:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:38:39 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:38:39 compute-0 sudo[126639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:38:39 compute-0 sudo[126639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:38:39 compute-0 sudo[126639]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:39 compute-0 sudo[126664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:38:39 compute-0 sudo[126664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:38:40 compute-0 podman[126702]: 2025-12-01 20:38:40.019078068 +0000 UTC m=+0.018217072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:38:40 compute-0 podman[126702]: 2025-12-01 20:38:40.263834295 +0000 UTC m=+0.262973309 container create 8926232c0ecd27e5104391bb5714cd78c22e2cc042a99a3d35df7f72b61535ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gauss, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:38:40 compute-0 sshd-session[126716]: Accepted publickey for zuul from 192.168.122.30 port 47716 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:38:40 compute-0 systemd-logind[796]: New session 44 of user zuul.
Dec 01 20:38:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:40 compute-0 systemd[1]: Started Session 44 of User zuul.
Dec 01 20:38:40 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:38:40 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:38:40 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:38:40 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:38:40 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:38:40 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:38:40 compute-0 sshd-session[126716]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:38:40 compute-0 systemd[1]: Started libpod-conmon-8926232c0ecd27e5104391bb5714cd78c22e2cc042a99a3d35df7f72b61535ab.scope.
Dec 01 20:38:40 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:38:41 compute-0 podman[126702]: 2025-12-01 20:38:41.22928751 +0000 UTC m=+1.228426534 container init 8926232c0ecd27e5104391bb5714cd78c22e2cc042a99a3d35df7f72b61535ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gauss, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:38:41 compute-0 podman[126702]: 2025-12-01 20:38:41.239549102 +0000 UTC m=+1.238688096 container start 8926232c0ecd27e5104391bb5714cd78c22e2cc042a99a3d35df7f72b61535ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:38:41 compute-0 funny_gauss[126728]: 167 167
Dec 01 20:38:41 compute-0 systemd[1]: libpod-8926232c0ecd27e5104391bb5714cd78c22e2cc042a99a3d35df7f72b61535ab.scope: Deactivated successfully.
Dec 01 20:38:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:41 compute-0 python3.9[126888]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:38:42 compute-0 podman[126702]: 2025-12-01 20:38:42.031213957 +0000 UTC m=+2.030352951 container attach 8926232c0ecd27e5104391bb5714cd78c22e2cc042a99a3d35df7f72b61535ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gauss, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 01 20:38:42 compute-0 podman[126702]: 2025-12-01 20:38:42.032024352 +0000 UTC m=+2.031163336 container died 8926232c0ecd27e5104391bb5714cd78c22e2cc042a99a3d35df7f72b61535ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 01 20:38:42 compute-0 ceph-mon[75880]: pgmap v269: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-94b64bd331e3cf875f8e283273a6f733a4a33c8dc2998a2e1a6076029a4fefeb-merged.mount: Deactivated successfully.
Dec 01 20:38:42 compute-0 podman[126702]: 2025-12-01 20:38:42.136616698 +0000 UTC m=+2.135755682 container remove 8926232c0ecd27e5104391bb5714cd78c22e2cc042a99a3d35df7f72b61535ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 01 20:38:42 compute-0 systemd[1]: libpod-conmon-8926232c0ecd27e5104391bb5714cd78c22e2cc042a99a3d35df7f72b61535ab.scope: Deactivated successfully.
Dec 01 20:38:42 compute-0 podman[126979]: 2025-12-01 20:38:42.282571352 +0000 UTC m=+0.022274601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:38:42 compute-0 sudo[127064]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkjrtjakqyfxwkctqghgioyygnvbpeol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621522.1647756-34-665018329336/AnsiballZ_setup.py'
Dec 01 20:38:42 compute-0 sudo[127064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:42 compute-0 podman[126979]: 2025-12-01 20:38:42.518388529 +0000 UTC m=+0.258091758 container create d64f5f44929957c6917dbf43edf460241ef09f2627249e304ac623ea537fe6e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:38:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:42 compute-0 systemd[1]: Started libpod-conmon-d64f5f44929957c6917dbf43edf460241ef09f2627249e304ac623ea537fe6e7.scope.
Dec 01 20:38:42 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d43a109fa248acaa50188d7629b90a1f79ec8ede6f745aa958ca1c975f7578e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d43a109fa248acaa50188d7629b90a1f79ec8ede6f745aa958ca1c975f7578e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d43a109fa248acaa50188d7629b90a1f79ec8ede6f745aa958ca1c975f7578e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d43a109fa248acaa50188d7629b90a1f79ec8ede6f745aa958ca1c975f7578e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d43a109fa248acaa50188d7629b90a1f79ec8ede6f745aa958ca1c975f7578e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:42 compute-0 podman[126979]: 2025-12-01 20:38:42.606411303 +0000 UTC m=+0.346114552 container init d64f5f44929957c6917dbf43edf460241ef09f2627249e304ac623ea537fe6e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:38:42 compute-0 podman[126979]: 2025-12-01 20:38:42.617584984 +0000 UTC m=+0.357288213 container start d64f5f44929957c6917dbf43edf460241ef09f2627249e304ac623ea537fe6e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 20:38:42 compute-0 podman[126979]: 2025-12-01 20:38:42.649073643 +0000 UTC m=+0.388776912 container attach d64f5f44929957c6917dbf43edf460241ef09f2627249e304ac623ea537fe6e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Dec 01 20:38:42 compute-0 python3.9[127066]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:38:43 compute-0 sudo[127064]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:43 compute-0 exciting_chatterjee[127069]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:38:43 compute-0 exciting_chatterjee[127069]: --> All data devices are unavailable
Dec 01 20:38:43 compute-0 systemd[1]: libpod-d64f5f44929957c6917dbf43edf460241ef09f2627249e304ac623ea537fe6e7.scope: Deactivated successfully.
Dec 01 20:38:43 compute-0 podman[126979]: 2025-12-01 20:38:43.104436935 +0000 UTC m=+0.844140184 container died d64f5f44929957c6917dbf43edf460241ef09f2627249e304ac623ea537fe6e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 01 20:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d43a109fa248acaa50188d7629b90a1f79ec8ede6f745aa958ca1c975f7578e-merged.mount: Deactivated successfully.
Dec 01 20:38:43 compute-0 podman[126979]: 2025-12-01 20:38:43.200642447 +0000 UTC m=+0.940345676 container remove d64f5f44929957c6917dbf43edf460241ef09f2627249e304ac623ea537fe6e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:38:43 compute-0 systemd[1]: libpod-conmon-d64f5f44929957c6917dbf43edf460241ef09f2627249e304ac623ea537fe6e7.scope: Deactivated successfully.
Dec 01 20:38:43 compute-0 sudo[126664]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:43 compute-0 sudo[127109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:38:43 compute-0 sudo[127109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:38:43 compute-0 sudo[127109]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:43 compute-0 sudo[127157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:38:43 compute-0 sudo[127157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:38:43 compute-0 sudo[127232]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwxfnbnqqjteczlzcmtmwbebrajacgnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621522.1647756-34-665018329336/AnsiballZ_dnf.py'
Dec 01 20:38:43 compute-0 sudo[127232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:43 compute-0 python3.9[127234]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 01 20:38:43 compute-0 podman[127248]: 2025-12-01 20:38:43.655407781 +0000 UTC m=+0.081377957 container create 10acb55e1832260951d87c06b91394c0e5e7b65be4cafb00cf987906cc6ce03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:38:43 compute-0 podman[127248]: 2025-12-01 20:38:43.597890714 +0000 UTC m=+0.023860930 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:38:44 compute-0 systemd[1]: Started libpod-conmon-10acb55e1832260951d87c06b91394c0e5e7b65be4cafb00cf987906cc6ce03a.scope.
Dec 01 20:38:44 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:38:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:44 compute-0 ceph-mon[75880]: pgmap v270: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:44 compute-0 podman[127248]: 2025-12-01 20:38:44.56244849 +0000 UTC m=+0.988418686 container init 10acb55e1832260951d87c06b91394c0e5e7b65be4cafb00cf987906cc6ce03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lederberg, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:38:44 compute-0 podman[127248]: 2025-12-01 20:38:44.570272756 +0000 UTC m=+0.996242942 container start 10acb55e1832260951d87c06b91394c0e5e7b65be4cafb00cf987906cc6ce03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lederberg, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 01 20:38:44 compute-0 romantic_lederberg[127267]: 167 167
Dec 01 20:38:44 compute-0 systemd[1]: libpod-10acb55e1832260951d87c06b91394c0e5e7b65be4cafb00cf987906cc6ce03a.scope: Deactivated successfully.
Dec 01 20:38:45 compute-0 podman[127248]: 2025-12-01 20:38:45.386099709 +0000 UTC m=+1.812069975 container attach 10acb55e1832260951d87c06b91394c0e5e7b65be4cafb00cf987906cc6ce03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lederberg, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 01 20:38:45 compute-0 podman[127248]: 2025-12-01 20:38:45.387044219 +0000 UTC m=+1.813014455 container died 10acb55e1832260951d87c06b91394c0e5e7b65be4cafb00cf987906cc6ce03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:38:45 compute-0 sudo[127232]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:46 compute-0 python3.9[127433]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:38:46 compute-0 ceph-mon[75880]: pgmap v271: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fc10c841cdc4890f7b09234295a34e22548362eb55c2c58f24a6a1065ab0361-merged.mount: Deactivated successfully.
Dec 01 20:38:47 compute-0 podman[127248]: 2025-12-01 20:38:47.199271159 +0000 UTC m=+3.625241345 container remove 10acb55e1832260951d87c06b91394c0e5e7b65be4cafb00cf987906cc6ce03a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 01 20:38:47 compute-0 systemd[1]: libpod-conmon-10acb55e1832260951d87c06b91394c0e5e7b65be4cafb00cf987906cc6ce03a.scope: Deactivated successfully.
Dec 01 20:38:47 compute-0 podman[127519]: 2025-12-01 20:38:47.358162689 +0000 UTC m=+0.042995041 container create dc850d3377560150cfb7aa30b84b33ed5bb46b3f51db84a8b72eb67a7c514be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:38:47 compute-0 systemd[1]: Started libpod-conmon-dc850d3377560150cfb7aa30b84b33ed5bb46b3f51db84a8b72eb67a7c514be3.scope.
Dec 01 20:38:47 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9600941eace0820f2f3eef046ffdcdd05df655f89aa524d405e26293f9b54da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9600941eace0820f2f3eef046ffdcdd05df655f89aa524d405e26293f9b54da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9600941eace0820f2f3eef046ffdcdd05df655f89aa524d405e26293f9b54da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9600941eace0820f2f3eef046ffdcdd05df655f89aa524d405e26293f9b54da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:47 compute-0 podman[127519]: 2025-12-01 20:38:47.336704016 +0000 UTC m=+0.021536388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:38:47 compute-0 podman[127519]: 2025-12-01 20:38:47.559658809 +0000 UTC m=+0.244491161 container init dc850d3377560150cfb7aa30b84b33ed5bb46b3f51db84a8b72eb67a7c514be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 01 20:38:47 compute-0 podman[127519]: 2025-12-01 20:38:47.570122907 +0000 UTC m=+0.254955259 container start dc850d3377560150cfb7aa30b84b33ed5bb46b3f51db84a8b72eb67a7c514be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_margulis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:38:47 compute-0 podman[127519]: 2025-12-01 20:38:47.586417719 +0000 UTC m=+0.271250091 container attach dc850d3377560150cfb7aa30b84b33ed5bb46b3f51db84a8b72eb67a7c514be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_margulis, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:38:47 compute-0 python3.9[127613]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 01 20:38:47 compute-0 ceph-mon[75880]: pgmap v272: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:47 compute-0 cranky_margulis[127535]: {
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:     "0": [
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:         {
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "devices": [
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "/dev/loop3"
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             ],
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_name": "ceph_lv0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_size": "21470642176",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "name": "ceph_lv0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "tags": {
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.cluster_name": "ceph",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.crush_device_class": "",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.encrypted": "0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.objectstore": "bluestore",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.osd_id": "0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.type": "block",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.vdo": "0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.with_tpm": "0"
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             },
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "type": "block",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "vg_name": "ceph_vg0"
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:         }
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:     ],
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:     "1": [
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:         {
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "devices": [
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "/dev/loop4"
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             ],
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_name": "ceph_lv1",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_size": "21470642176",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "name": "ceph_lv1",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "tags": {
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.cluster_name": "ceph",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.crush_device_class": "",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.encrypted": "0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.objectstore": "bluestore",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.osd_id": "1",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.type": "block",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.vdo": "0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.with_tpm": "0"
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             },
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "type": "block",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "vg_name": "ceph_vg1"
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:         }
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:     ],
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:     "2": [
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:         {
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "devices": [
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "/dev/loop5"
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             ],
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_name": "ceph_lv2",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_size": "21470642176",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "name": "ceph_lv2",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "tags": {
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.cluster_name": "ceph",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.crush_device_class": "",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.encrypted": "0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.objectstore": "bluestore",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.osd_id": "2",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.type": "block",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.vdo": "0",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:                 "ceph.with_tpm": "0"
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             },
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "type": "block",
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:             "vg_name": "ceph_vg2"
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:         }
Dec 01 20:38:47 compute-0 cranky_margulis[127535]:     ]
Dec 01 20:38:47 compute-0 cranky_margulis[127535]: }
Dec 01 20:38:47 compute-0 systemd[1]: libpod-dc850d3377560150cfb7aa30b84b33ed5bb46b3f51db84a8b72eb67a7c514be3.scope: Deactivated successfully.
Dec 01 20:38:47 compute-0 podman[127519]: 2025-12-01 20:38:47.873666671 +0000 UTC m=+0.558499023 container died dc850d3377560150cfb7aa30b84b33ed5bb46b3f51db84a8b72eb67a7c514be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9600941eace0820f2f3eef046ffdcdd05df655f89aa524d405e26293f9b54da-merged.mount: Deactivated successfully.
Dec 01 20:38:47 compute-0 podman[127519]: 2025-12-01 20:38:47.953915801 +0000 UTC m=+0.638748153 container remove dc850d3377560150cfb7aa30b84b33ed5bb46b3f51db84a8b72eb67a7c514be3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:38:47 compute-0 systemd[1]: libpod-conmon-dc850d3377560150cfb7aa30b84b33ed5bb46b3f51db84a8b72eb67a7c514be3.scope: Deactivated successfully.
Dec 01 20:38:47 compute-0 sudo[127157]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:48 compute-0 sudo[127707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:38:48 compute-0 sudo[127707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:38:48 compute-0 sudo[127707]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:48 compute-0 sudo[127732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:38:48 compute-0 sudo[127732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:38:48 compute-0 podman[127843]: 2025-12-01 20:38:48.348891337 +0000 UTC m=+0.026756741 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:38:48 compute-0 python3.9[127842]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:38:48 compute-0 podman[127843]: 2025-12-01 20:38:48.49258156 +0000 UTC m=+0.170446984 container create 72faa533aa7553811bba3bb597685da483cffe609a78b2ba7d444067e72115df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_galileo, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 01 20:38:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:48 compute-0 systemd[1]: Started libpod-conmon-72faa533aa7553811bba3bb597685da483cffe609a78b2ba7d444067e72115df.scope.
Dec 01 20:38:48 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:38:49 compute-0 python3.9[128011]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:38:49 compute-0 sshd-session[126719]: Connection closed by 192.168.122.30 port 47716
Dec 01 20:38:49 compute-0 sshd-session[126716]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:38:49 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Dec 01 20:38:49 compute-0 systemd[1]: session-44.scope: Consumed 5.733s CPU time.
Dec 01 20:38:49 compute-0 systemd-logind[796]: Session 44 logged out. Waiting for processes to exit.
Dec 01 20:38:49 compute-0 systemd-logind[796]: Removed session 44.
Dec 01 20:38:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:51 compute-0 podman[127843]: 2025-12-01 20:38:51.340855491 +0000 UTC m=+3.018720915 container init 72faa533aa7553811bba3bb597685da483cffe609a78b2ba7d444067e72115df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 01 20:38:51 compute-0 ceph-mon[75880]: pgmap v273: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:51 compute-0 podman[127843]: 2025-12-01 20:38:51.350528875 +0000 UTC m=+3.028394269 container start 72faa533aa7553811bba3bb597685da483cffe609a78b2ba7d444067e72115df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 01 20:38:51 compute-0 interesting_galileo[127953]: 167 167
Dec 01 20:38:51 compute-0 systemd[1]: libpod-72faa533aa7553811bba3bb597685da483cffe609a78b2ba7d444067e72115df.scope: Deactivated successfully.
Dec 01 20:38:51 compute-0 podman[127843]: 2025-12-01 20:38:51.370261185 +0000 UTC m=+3.048126599 container attach 72faa533aa7553811bba3bb597685da483cffe609a78b2ba7d444067e72115df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_galileo, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:38:51 compute-0 podman[127843]: 2025-12-01 20:38:51.371122412 +0000 UTC m=+3.048987796 container died 72faa533aa7553811bba3bb597685da483cffe609a78b2ba7d444067e72115df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_galileo, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 01 20:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-af2d3577e34496763cd94fbb2986922be75cdca469233f538ddfc4ff4efe3399-merged.mount: Deactivated successfully.
Dec 01 20:38:51 compute-0 podman[127843]: 2025-12-01 20:38:51.451794966 +0000 UTC m=+3.129660350 container remove 72faa533aa7553811bba3bb597685da483cffe609a78b2ba7d444067e72115df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:38:51 compute-0 systemd[1]: libpod-conmon-72faa533aa7553811bba3bb597685da483cffe609a78b2ba7d444067e72115df.scope: Deactivated successfully.
Dec 01 20:38:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:51 compute-0 podman[128059]: 2025-12-01 20:38:51.583496422 +0000 UTC m=+0.023037745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:38:52 compute-0 podman[128059]: 2025-12-01 20:38:52.060758462 +0000 UTC m=+0.500299795 container create 25b25b9cddd091ef85a62c6637a0da18c6553ad86b5fc9c43045c023ea1e5b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:38:52 compute-0 systemd[1]: Started libpod-conmon-25b25b9cddd091ef85a62c6637a0da18c6553ad86b5fc9c43045c023ea1e5b82.scope.
Dec 01 20:38:52 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e968990ccebf578bfe2fe119452e83c119ad99081bd16c23d0ad49b9ec0413d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e968990ccebf578bfe2fe119452e83c119ad99081bd16c23d0ad49b9ec0413d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e968990ccebf578bfe2fe119452e83c119ad99081bd16c23d0ad49b9ec0413d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e968990ccebf578bfe2fe119452e83c119ad99081bd16c23d0ad49b9ec0413d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:38:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:52 compute-0 podman[128059]: 2025-12-01 20:38:52.686725973 +0000 UTC m=+1.126267356 container init 25b25b9cddd091ef85a62c6637a0da18c6553ad86b5fc9c43045c023ea1e5b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_haslett, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:38:52 compute-0 podman[128059]: 2025-12-01 20:38:52.695904731 +0000 UTC m=+1.135446034 container start 25b25b9cddd091ef85a62c6637a0da18c6553ad86b5fc9c43045c023ea1e5b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 01 20:38:52 compute-0 ceph-mon[75880]: pgmap v274: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:52 compute-0 podman[128059]: 2025-12-01 20:38:52.700237557 +0000 UTC m=+1.139778910 container attach 25b25b9cddd091ef85a62c6637a0da18c6553ad86b5fc9c43045c023ea1e5b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:38:53 compute-0 lvm[128154]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:38:53 compute-0 lvm[128154]: VG ceph_vg0 finished
Dec 01 20:38:53 compute-0 lvm[128157]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:38:53 compute-0 lvm[128157]: VG ceph_vg1 finished
Dec 01 20:38:53 compute-0 lvm[128158]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:38:53 compute-0 lvm[128158]: VG ceph_vg2 finished
Dec 01 20:38:53 compute-0 interesting_haslett[128076]: {}
Dec 01 20:38:53 compute-0 systemd[1]: libpod-25b25b9cddd091ef85a62c6637a0da18c6553ad86b5fc9c43045c023ea1e5b82.scope: Deactivated successfully.
Dec 01 20:38:53 compute-0 podman[128059]: 2025-12-01 20:38:53.515394971 +0000 UTC m=+1.954936254 container died 25b25b9cddd091ef85a62c6637a0da18c6553ad86b5fc9c43045c023ea1e5b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 01 20:38:53 compute-0 systemd[1]: libpod-25b25b9cddd091ef85a62c6637a0da18c6553ad86b5fc9c43045c023ea1e5b82.scope: Consumed 1.307s CPU time.
Dec 01 20:38:54 compute-0 ceph-mon[75880]: pgmap v275: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e968990ccebf578bfe2fe119452e83c119ad99081bd16c23d0ad49b9ec0413d-merged.mount: Deactivated successfully.
Dec 01 20:38:54 compute-0 podman[128059]: 2025-12-01 20:38:54.500893659 +0000 UTC m=+2.940434952 container remove 25b25b9cddd091ef85a62c6637a0da18c6553ad86b5fc9c43045c023ea1e5b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:38:54 compute-0 sudo[127732]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:38:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:38:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:38:54 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:38:54 compute-0 systemd[1]: libpod-conmon-25b25b9cddd091ef85a62c6637a0da18c6553ad86b5fc9c43045c023ea1e5b82.scope: Deactivated successfully.
Dec 01 20:38:54 compute-0 sudo[128172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:38:54 compute-0 sudo[128172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:38:54 compute-0 sudo[128172]: pam_unix(sudo:session): session closed for user root
Dec 01 20:38:56 compute-0 ceph-mon[75880]: pgmap v276: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:38:56 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:38:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:38:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:56 compute-0 sshd-session[128197]: Accepted publickey for zuul from 192.168.122.30 port 36250 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:38:56 compute-0 systemd-logind[796]: New session 45 of user zuul.
Dec 01 20:38:56 compute-0 systemd[1]: Started Session 45 of User zuul.
Dec 01 20:38:56 compute-0 sshd-session[128197]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:38:58 compute-0 python3.9[128350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:38:58 compute-0 ceph-mon[75880]: pgmap v277: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:59 compute-0 ceph-mon[75880]: pgmap v278: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:38:59 compute-0 sudo[128504]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywquttwupukcwstygfytcoeiaciubkuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621539.2147417-50-79316452481156/AnsiballZ_file.py'
Dec 01 20:38:59 compute-0 sudo[128504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:38:59 compute-0 python3.9[128506]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:38:59 compute-0 sudo[128504]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:00 compute-0 sudo[128656]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agjtpnasdjfrmujalcbntdfyxxbgbrcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621539.9805-50-157091752953769/AnsiballZ_file.py'
Dec 01 20:39:00 compute-0 sudo[128656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:00 compute-0 python3.9[128658]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:00 compute-0 sudo[128656]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:01 compute-0 sudo[128808]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqnwmgxuobrokteixjvtlardeephdicj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621540.724591-65-124298869598191/AnsiballZ_stat.py'
Dec 01 20:39:01 compute-0 sudo[128808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:01 compute-0 python3.9[128810]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:01 compute-0 sudo[128808]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:01 compute-0 sudo[128931]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibcqfmuekshafwfwmzyinbvbucxnhqwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621540.724591-65-124298869598191/AnsiballZ_copy.py'
Dec 01 20:39:01 compute-0 sudo[128931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:02 compute-0 ceph-mon[75880]: pgmap v279: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:02 compute-0 python3.9[128933]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621540.724591-65-124298869598191/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=72da8c59de4822ab3f6132d5136aab10cfa7b535 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:02 compute-0 sudo[128931]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:02 compute-0 sudo[129083]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdrragzjtjkdytfxfvkzaocduenkbzkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621542.32129-65-59129056845278/AnsiballZ_stat.py'
Dec 01 20:39:02 compute-0 sudo[129083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:02 compute-0 python3.9[129085]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:02 compute-0 sudo[129083]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:03 compute-0 sudo[129206]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idcgoxmdrdkbloqjxkrabkzysmbkjnik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621542.32129-65-59129056845278/AnsiballZ_copy.py'
Dec 01 20:39:03 compute-0 sudo[129206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:03 compute-0 python3.9[129208]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621542.32129-65-59129056845278/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=42fb36465c1993cabee0fc31c86f613f725c5581 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:39:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:39:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:39:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:39:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:39:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:39:03 compute-0 sudo[129206]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:03 compute-0 sudo[129358]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tutkwseqxmwinmtlqrrsralrnlfmgccf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621543.454451-65-88568301601518/AnsiballZ_stat.py'
Dec 01 20:39:03 compute-0 sudo[129358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:03 compute-0 python3.9[129360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:03 compute-0 sudo[129358]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:04 compute-0 sudo[129481]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbmvodebqkqbcqbppqrjocpbfezdghvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621543.454451-65-88568301601518/AnsiballZ_copy.py'
Dec 01 20:39:04 compute-0 sudo[129481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:04 compute-0 python3.9[129483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621543.454451-65-88568301601518/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=de7bd4c4d7e3ebea9f8c793dec15c6dd886aadd7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:04 compute-0 sudo[129481]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:04 compute-0 ceph-mon[75880]: pgmap v280: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:05 compute-0 sudo[129633]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayiueypahheuafafwjjrswgkuuvdizhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621545.0748153-109-41505161987814/AnsiballZ_file.py'
Dec 01 20:39:05 compute-0 sudo[129633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:05 compute-0 python3.9[129635]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:05 compute-0 sudo[129633]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:05 compute-0 ceph-mon[75880]: pgmap v281: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:06 compute-0 sudo[129785]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xihclsaeaiinrdxoycwixtpobvinzlgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621545.9010735-109-221597991954393/AnsiballZ_file.py'
Dec 01 20:39:06 compute-0 sudo[129785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:06 compute-0 python3.9[129787]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:06 compute-0 sudo[129785]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:06 compute-0 sudo[129937]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdpccedaguqdksmexjgvniqbiibejhbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621546.5202942-124-107185401823598/AnsiballZ_stat.py'
Dec 01 20:39:06 compute-0 sudo[129937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:06 compute-0 python3.9[129939]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:06 compute-0 sudo[129937]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:07 compute-0 sudo[130060]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgeodysgaejzoumszjqvadbvhwpboang ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621546.5202942-124-107185401823598/AnsiballZ_copy.py'
Dec 01 20:39:07 compute-0 sudo[130060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:07 compute-0 ceph-mon[75880]: pgmap v282: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:07 compute-0 python3.9[130062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621546.5202942-124-107185401823598/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=26a2b1cf9dfb5ce3c6e250da543090e7a079c606 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:07 compute-0 sudo[130060]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:07 compute-0 sudo[130212]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wclanmlnqhmanhmvnazpiaqepbadczbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621547.62641-124-246373651526137/AnsiballZ_stat.py'
Dec 01 20:39:07 compute-0 sudo[130212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:08 compute-0 python3.9[130214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:08 compute-0 sudo[130212]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:08 compute-0 sudo[130335]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tywwxvietznjtbrqtkkgbwyqlzccjkrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621547.62641-124-246373651526137/AnsiballZ_copy.py'
Dec 01 20:39:08 compute-0 sudo[130335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:08 compute-0 python3.9[130337]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621547.62641-124-246373651526137/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3fbdde1fa93327b494bce41058ef63be15d3defe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:08 compute-0 sudo[130335]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:08 compute-0 sudo[130487]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbtjyoxcmpnbbypznmfkwwfbdwklmkax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621548.690587-124-100468615605571/AnsiballZ_stat.py'
Dec 01 20:39:08 compute-0 sudo[130487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:09 compute-0 python3.9[130489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:09 compute-0 sudo[130487]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:09 compute-0 ceph-mon[75880]: pgmap v283: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:09 compute-0 sudo[130610]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aojyfmsvnmblqpcktaxzcfavkjllrcai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621548.690587-124-100468615605571/AnsiballZ_copy.py'
Dec 01 20:39:09 compute-0 sudo[130610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:09 compute-0 python3.9[130612]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621548.690587-124-100468615605571/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=64d37b4f340c015d1aa9ad20c45829e4a04f521b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:09 compute-0 sudo[130610]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:10 compute-0 sudo[130762]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goggeccwrweommeuzobnvkmgzlaqttze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621549.9413126-168-48613517359261/AnsiballZ_file.py'
Dec 01 20:39:10 compute-0 sudo[130762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:10 compute-0 python3.9[130764]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:10 compute-0 sudo[130762]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:10 compute-0 sudo[130914]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weqwnveogrbynmtnjhnnjwfsiyoytzgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621550.4819477-168-8439473784255/AnsiballZ_file.py'
Dec 01 20:39:10 compute-0 sudo[130914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:10 compute-0 python3.9[130916]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:10 compute-0 sudo[130914]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:11 compute-0 sudo[131066]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyntsanwvsiwpnpjkjeptamrbesearxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621551.14693-183-147434409374340/AnsiballZ_stat.py'
Dec 01 20:39:11 compute-0 sudo[131066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:11 compute-0 python3.9[131068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:11 compute-0 sudo[131066]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:11 compute-0 sudo[131189]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsjhhvaonycttxeyngnoneicybnqahie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621551.14693-183-147434409374340/AnsiballZ_copy.py'
Dec 01 20:39:11 compute-0 sudo[131189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:12 compute-0 python3.9[131191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621551.14693-183-147434409374340/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=501172e79ae1359de81166f9a3430aad88647c94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:12 compute-0 sudo[131189]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:12 compute-0 ceph-mon[75880]: pgmap v284: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:39:12.159499) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621552159623, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6653, "num_deletes": 251, "total_data_size": 7705810, "memory_usage": 7841288, "flush_reason": "Manual Compaction"}
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621552238860, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 5739587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 6796, "table_properties": {"data_size": 5716055, "index_size": 15290, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7173, "raw_key_size": 63171, "raw_average_key_size": 22, "raw_value_size": 5661604, "raw_average_value_size": 1986, "num_data_blocks": 685, "num_entries": 2850, "num_filter_entries": 2850, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621074, "oldest_key_time": 1764621074, "file_creation_time": 1764621552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 79421 microseconds, and 12367 cpu microseconds.
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:39:12.238930) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 5739587 bytes OK
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:39:12.238952) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:39:12.240431) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:39:12.240448) EVENT_LOG_v1 {"time_micros": 1764621552240443, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:39:12.240476) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 7677672, prev total WAL file size 7677672, number of live WAL files 2.
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:39:12.242011) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(5605KB) 13(58KB) 8(1944B)]
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621552242138, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 5801475, "oldest_snapshot_seqno": -1}
Dec 01 20:39:12 compute-0 sudo[131342]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqgsinvncmkefwxwrokbuqwgzyzxomiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621552.142646-183-11855807238145/AnsiballZ_stat.py'
Dec 01 20:39:12 compute-0 sudo[131342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2676 keys, 5754465 bytes, temperature: kUnknown
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621552393533, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 5754465, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5731275, "index_size": 15401, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6725, "raw_key_size": 61611, "raw_average_key_size": 23, "raw_value_size": 5678116, "raw_average_value_size": 2121, "num_data_blocks": 690, "num_entries": 2676, "num_filter_entries": 2676, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 1764621552, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:39:12.393813) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 5754465 bytes
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:39:12.470510) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.3 rd, 38.0 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(5.5, 0.0 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2965, records dropped: 289 output_compression: NoCompression
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:39:12.470552) EVENT_LOG_v1 {"time_micros": 1764621552470536, "job": 4, "event": "compaction_finished", "compaction_time_micros": 151502, "compaction_time_cpu_micros": 13242, "output_level": 6, "num_output_files": 1, "total_output_size": 5754465, "num_input_records": 2965, "num_output_records": 2676, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621552472151, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621552472324, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621552472415, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 01 20:39:12 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:39:12.241880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:39:12 compute-0 python3.9[131344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:12 compute-0 sudo[131342]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:12 compute-0 sudo[131465]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvtalyokwamvkpbvimiqiutrhogujeto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621552.142646-183-11855807238145/AnsiballZ_copy.py'
Dec 01 20:39:12 compute-0 sudo[131465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:13 compute-0 python3.9[131467]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621552.142646-183-11855807238145/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3fbdde1fa93327b494bce41058ef63be15d3defe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:13 compute-0 sudo[131465]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:13 compute-0 sudo[131617]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuylegddxzxdecmbffgdhvcofkwillcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621553.2563632-183-37065552624516/AnsiballZ_stat.py'
Dec 01 20:39:13 compute-0 sudo[131617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:13 compute-0 ceph-mon[75880]: pgmap v285: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:13 compute-0 python3.9[131619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:13 compute-0 sudo[131617]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:14 compute-0 sudo[131740]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtjqvrvsydpmertpvfxhdpfaegyvqlvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621553.2563632-183-37065552624516/AnsiballZ_copy.py'
Dec 01 20:39:14 compute-0 sudo[131740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:14 compute-0 python3.9[131742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621553.2563632-183-37065552624516/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=db4388feb726581bd3cff2eab72336ee6b34c43f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:14 compute-0 sudo[131740]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v286: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:15 compute-0 sudo[131892]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcoxlzfhmrrwqimbzyppjncnyjekpxlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621554.9547884-243-269171565460818/AnsiballZ_file.py'
Dec 01 20:39:15 compute-0 sudo[131892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:15 compute-0 python3.9[131894]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:15 compute-0 sudo[131892]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:15 compute-0 sudo[132044]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwjlupkyuzresbpcceuwfmzbnkizacj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621555.6337306-251-123012737124448/AnsiballZ_stat.py'
Dec 01 20:39:15 compute-0 sudo[132044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:15 compute-0 ceph-mon[75880]: pgmap v286: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:16 compute-0 python3.9[132046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:16 compute-0 sudo[132044]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:16 compute-0 sudo[132167]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yonngvuqnzfufvbuvquxfayrohdwuikf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621555.6337306-251-123012737124448/AnsiballZ_copy.py'
Dec 01 20:39:16 compute-0 sudo[132167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:16 compute-0 python3.9[132169]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621555.6337306-251-123012737124448/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3df024ac0733db6e5f9a52fcc7729417ba9442f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:16 compute-0 sudo[132167]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:17 compute-0 sudo[132319]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyfpuuujemwyrgeiytpypwxauzcwfinu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621556.8324094-267-217752782500635/AnsiballZ_file.py'
Dec 01 20:39:17 compute-0 sudo[132319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:17 compute-0 python3.9[132321]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:17 compute-0 sudo[132319]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:17 compute-0 sudo[132471]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zniqwhliamzmuohnlrgitxemntonbncf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621557.3955662-275-38017227887934/AnsiballZ_stat.py'
Dec 01 20:39:17 compute-0 sudo[132471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:17 compute-0 python3.9[132473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:17 compute-0 sudo[132471]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:17 compute-0 ceph-mon[75880]: pgmap v287: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:18 compute-0 sudo[132594]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqgzdrsyjubtpfklccvgmxwtxwhxslky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621557.3955662-275-38017227887934/AnsiballZ_copy.py'
Dec 01 20:39:18 compute-0 sudo[132594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:18 compute-0 python3.9[132596]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621557.3955662-275-38017227887934/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3df024ac0733db6e5f9a52fcc7729417ba9442f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:18 compute-0 sudo[132594]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v288: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:18 compute-0 sudo[132746]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gezxeyiswfmaslticrsiscsnilkhifju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621558.4903836-291-139701190352175/AnsiballZ_file.py'
Dec 01 20:39:18 compute-0 sudo[132746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:18 compute-0 python3.9[132748]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:18 compute-0 sudo[132746]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:19 compute-0 sudo[132898]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cswqztgokstjacsmdvgwjezxnfaynicp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621559.055712-299-146560489586576/AnsiballZ_stat.py'
Dec 01 20:39:19 compute-0 sudo[132898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:19 compute-0 ceph-mon[75880]: pgmap v288: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:19 compute-0 python3.9[132900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:19 compute-0 sudo[132898]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:19 compute-0 sudo[133021]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pojzyugqwyatyzxqcqjuykzwwixjmiyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621559.055712-299-146560489586576/AnsiballZ_copy.py'
Dec 01 20:39:19 compute-0 sudo[133021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:20 compute-0 python3.9[133023]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621559.055712-299-146560489586576/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3df024ac0733db6e5f9a52fcc7729417ba9442f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:20 compute-0 sudo[133021]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:20 compute-0 sudo[133173]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyyqwcduhdxhghmpxvvvxiyotnenhilq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621560.367769-315-45889954893302/AnsiballZ_file.py'
Dec 01 20:39:20 compute-0 sudo[133173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:20 compute-0 python3.9[133175]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:20 compute-0 sudo[133173]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:21 compute-0 sudo[133325]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxchytiergnljtbvfltwcemfhnuhmvqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621561.0102823-323-250581445426007/AnsiballZ_stat.py'
Dec 01 20:39:21 compute-0 sudo[133325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:21 compute-0 python3.9[133327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:21 compute-0 sudo[133325]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:21 compute-0 ceph-mon[75880]: pgmap v289: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:21 compute-0 sudo[133448]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fblfndtfzcsfullnbdzsgjgwdimvftky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621561.0102823-323-250581445426007/AnsiballZ_copy.py'
Dec 01 20:39:21 compute-0 sudo[133448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:21 compute-0 python3.9[133450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621561.0102823-323-250581445426007/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3df024ac0733db6e5f9a52fcc7729417ba9442f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:21 compute-0 sudo[133448]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:22 compute-0 sudo[133600]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqsgvuyskcuweqwgvzvakrvoibesvhwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621562.125834-339-38977014220790/AnsiballZ_file.py'
Dec 01 20:39:22 compute-0 sudo[133600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:22 compute-0 python3.9[133602]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:22 compute-0 sudo[133600]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:23 compute-0 sudo[133752]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuqolxhmuywgafoaomtatapudjntnweq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621562.8198097-347-52867955815172/AnsiballZ_stat.py'
Dec 01 20:39:23 compute-0 sudo[133752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:23 compute-0 python3.9[133754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:23 compute-0 sudo[133752]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:23 compute-0 sudo[133875]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeaimpmoabnbdwbomaoimllgtvmnpqlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621562.8198097-347-52867955815172/AnsiballZ_copy.py'
Dec 01 20:39:23 compute-0 sudo[133875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:23 compute-0 python3.9[133877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621562.8198097-347-52867955815172/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3df024ac0733db6e5f9a52fcc7729417ba9442f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:23 compute-0 sudo[133875]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:23 compute-0 ceph-mon[75880]: pgmap v290: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:24 compute-0 sudo[134027]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecnmtzajizgzvpztbkxihuozpqdszsyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621563.9639914-363-201428951900560/AnsiballZ_file.py'
Dec 01 20:39:24 compute-0 sudo[134027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:24 compute-0 python3.9[134029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:24 compute-0 sudo[134027]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:24 compute-0 sudo[134179]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzqkkofudofkmkjbfzratdppwtplchsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621564.5630796-371-30119097618475/AnsiballZ_stat.py'
Dec 01 20:39:24 compute-0 sudo[134179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:25 compute-0 python3.9[134181]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:25 compute-0 sudo[134179]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:25 compute-0 sudo[134302]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yktehxjwueleqehbiddijrsrmobsheat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621564.5630796-371-30119097618475/AnsiballZ_copy.py'
Dec 01 20:39:25 compute-0 sudo[134302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:25 compute-0 python3.9[134304]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621564.5630796-371-30119097618475/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3df024ac0733db6e5f9a52fcc7729417ba9442f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:25 compute-0 sudo[134302]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:25 compute-0 sshd-session[128200]: Connection closed by 192.168.122.30 port 36250
Dec 01 20:39:25 compute-0 sshd-session[128197]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:39:25 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Dec 01 20:39:25 compute-0 systemd[1]: session-45.scope: Consumed 22.089s CPU time.
Dec 01 20:39:25 compute-0 systemd-logind[796]: Session 45 logged out. Waiting for processes to exit.
Dec 01 20:39:25 compute-0 systemd-logind[796]: Removed session 45.
Dec 01 20:39:25 compute-0 ceph-mon[75880]: pgmap v291: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:27 compute-0 ceph-mon[75880]: pgmap v292: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:30 compute-0 ceph-mon[75880]: pgmap v293: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:31 compute-0 ceph-mon[75880]: pgmap v294: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:31 compute-0 sshd-session[134329]: Accepted publickey for zuul from 192.168.122.30 port 58644 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:39:31 compute-0 systemd-logind[796]: New session 46 of user zuul.
Dec 01 20:39:31 compute-0 systemd[1]: Started Session 46 of User zuul.
Dec 01 20:39:31 compute-0 sshd-session[134329]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:39:32 compute-0 sudo[134482]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxkriclmdpznpfendzivpkrwznrpnlnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621571.737298-22-126420525866581/AnsiballZ_file.py'
Dec 01 20:39:32 compute-0 sudo[134482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:32 compute-0 python3.9[134484]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:32 compute-0 sudo[134482]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:39:32
Dec 01 20:39:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:39:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:39:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images', 'cephfs.cephfs.data', 'vms']
Dec 01 20:39:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:39:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:33 compute-0 sudo[134634]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bypghxuwjvipxidbcnjdngarbvgfduwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621572.568955-34-241561634031368/AnsiballZ_stat.py'
Dec 01 20:39:33 compute-0 sudo[134634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:33 compute-0 python3.9[134636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:33 compute-0 sudo[134634]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:39:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:39:33 compute-0 ceph-mon[75880]: pgmap v295: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:33 compute-0 sudo[134757]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqhzmbluvbaahvvisvjucdmvjjjlsvei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621572.568955-34-241561634031368/AnsiballZ_copy.py'
Dec 01 20:39:33 compute-0 sudo[134757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:33 compute-0 python3.9[134759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621572.568955-34-241561634031368/.source.conf _original_basename=ceph.conf follow=False checksum=363a09c874695b48aaa3b08fc62ded62ac8e42db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:33 compute-0 sudo[134757]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:34 compute-0 sudo[134909]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvvkqhtqazwamykqdleuzvbacfefhxwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621574.0305576-34-208188560667757/AnsiballZ_stat.py'
Dec 01 20:39:34 compute-0 sudo[134909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:34 compute-0 python3.9[134911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:34 compute-0 sudo[134909]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:34 compute-0 sudo[135032]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atuayzdysjshpugfrhisxrikbrmsbvqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621574.0305576-34-208188560667757/AnsiballZ_copy.py'
Dec 01 20:39:34 compute-0 sudo[135032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:34 compute-0 python3.9[135034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621574.0305576-34-208188560667757/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=2727cc641a79df9d39e1e523028429a32d1fa66b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:34 compute-0 sudo[135032]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:35 compute-0 sshd-session[134332]: Connection closed by 192.168.122.30 port 58644
Dec 01 20:39:35 compute-0 sshd-session[134329]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:39:35 compute-0 systemd-logind[796]: Session 46 logged out. Waiting for processes to exit.
Dec 01 20:39:35 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Dec 01 20:39:35 compute-0 systemd[1]: session-46.scope: Consumed 2.534s CPU time.
Dec 01 20:39:35 compute-0 systemd-logind[796]: Removed session 46.
Dec 01 20:39:36 compute-0 ceph-mon[75880]: pgmap v296: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:37 compute-0 ceph-mon[75880]: pgmap v297: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:40 compute-0 ceph-mon[75880]: pgmap v298: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:39:40 compute-0 sshd-session[135059]: Accepted publickey for zuul from 192.168.122.30 port 59558 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:39:40 compute-0 systemd-logind[796]: New session 47 of user zuul.
Dec 01 20:39:40 compute-0 systemd[1]: Started Session 47 of User zuul.
Dec 01 20:39:40 compute-0 sshd-session[135059]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:39:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:41 compute-0 ceph-mon[75880]: pgmap v299: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:41 compute-0 python3.9[135212]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:39:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:42 compute-0 sudo[135366]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qomwngrjyjqxajmqcitkjakparwzohlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621581.9729044-34-242012924820195/AnsiballZ_file.py'
Dec 01 20:39:42 compute-0 sudo[135366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:42 compute-0 python3.9[135368]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:42 compute-0 sudo[135366]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:42 compute-0 sudo[135518]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihlkddkqglabqezbvxdtqbeoefwypmku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621582.6587806-34-200489127739736/AnsiballZ_file.py'
Dec 01 20:39:42 compute-0 sudo[135518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:43 compute-0 python3.9[135520]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:39:43 compute-0 sudo[135518]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:43 compute-0 python3.9[135670]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:39:43 compute-0 ceph-mon[75880]: pgmap v300: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:44 compute-0 sudo[135820]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfefuxbsewugfwtcgfbguhachioqntxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621583.9567676-57-158709043357588/AnsiballZ_seboolean.py'
Dec 01 20:39:44 compute-0 sudo[135820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:44 compute-0 python3.9[135822]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 01 20:39:45 compute-0 ceph-mon[75880]: pgmap v301: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:46 compute-0 sudo[135820]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:46 compute-0 sudo[135976]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksyjiunkcdipybomihpeqqjgshpmmbkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621586.2776852-67-215966993813132/AnsiballZ_setup.py'
Dec 01 20:39:46 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 01 20:39:46 compute-0 sudo[135976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:46 compute-0 python3.9[135978]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:39:47 compute-0 sudo[135976]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:47 compute-0 sudo[136060]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycugcmyeueyprsvrxxingzejmfqizbsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621586.2776852-67-215966993813132/AnsiballZ_dnf.py'
Dec 01 20:39:47 compute-0 sudo[136060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:47 compute-0 python3.9[136062]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:39:48 compute-0 ceph-mon[75880]: pgmap v302: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:49 compute-0 sudo[136060]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:49 compute-0 sudo[136213]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfsmkirsoobjudetjjdhmhayqybozbzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621589.2161455-79-147728596568039/AnsiballZ_systemd.py'
Dec 01 20:39:49 compute-0 sudo[136213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:50 compute-0 python3.9[136215]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 01 20:39:50 compute-0 ceph-mon[75880]: pgmap v303: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:50 compute-0 sudo[136213]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:50 compute-0 sudo[136368]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjttavsjfmmzjfddcdxsettigsphuup ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764621590.3706353-87-113520751450527/AnsiballZ_edpm_nftables_snippet.py'
Dec 01 20:39:50 compute-0 sudo[136368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:50 compute-0 python3[136370]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 01 20:39:50 compute-0 sudo[136368]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:51 compute-0 sudo[136520]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpymozkfeoropdyfzbenqhjfxierunqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621591.1861172-96-107594448853290/AnsiballZ_file.py'
Dec 01 20:39:51 compute-0 sudo[136520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:51 compute-0 python3.9[136522]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:51 compute-0 sudo[136520]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:52 compute-0 sudo[136672]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgiagnkgmiiosveexfhojnnmkmstoefw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621591.7554972-104-56440957215225/AnsiballZ_stat.py'
Dec 01 20:39:52 compute-0 sudo[136672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:52 compute-0 ceph-mon[75880]: pgmap v304: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:52 compute-0 python3.9[136674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:52 compute-0 sudo[136672]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:52 compute-0 sudo[136750]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nysknlxcafxjyojyrlkhsbxpwssaciux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621591.7554972-104-56440957215225/AnsiballZ_file.py'
Dec 01 20:39:52 compute-0 sudo[136750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:52 compute-0 python3.9[136752]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:52 compute-0 sudo[136750]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:53 compute-0 sudo[136902]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpwvchneakemjnblaiihhtuowdwbadsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621592.874472-116-122648080200324/AnsiballZ_stat.py'
Dec 01 20:39:53 compute-0 sudo[136902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:53 compute-0 python3.9[136904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:53 compute-0 sudo[136902]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:53 compute-0 sudo[136980]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fteeajlhnjocfmebtvxddlnmdfkvvirg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621592.874472-116-122648080200324/AnsiballZ_file.py'
Dec 01 20:39:53 compute-0 sudo[136980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:53 compute-0 python3.9[136982]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ghr18tla recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:53 compute-0 sudo[136980]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:54 compute-0 sudo[137132]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twizpjkgyhniskoauwwuxtpddheldnjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621593.8608727-128-28488852436277/AnsiballZ_stat.py'
Dec 01 20:39:54 compute-0 sudo[137132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:54 compute-0 ceph-mon[75880]: pgmap v305: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:54 compute-0 python3.9[137134]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:54 compute-0 sudo[137132]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:54 compute-0 sudo[137210]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quswbynbmkwvhkqohmgbjdbvwzljhjpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621593.8608727-128-28488852436277/AnsiballZ_file.py'
Dec 01 20:39:54 compute-0 sudo[137210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:54 compute-0 sudo[137213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:39:54 compute-0 sudo[137213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:39:54 compute-0 sudo[137213]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:54 compute-0 python3.9[137212]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:54 compute-0 sudo[137210]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:54 compute-0 sudo[137238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:39:54 compute-0 sudo[137238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:39:55 compute-0 sudo[137238]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:39:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:39:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:39:55 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:39:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:39:55 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:39:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:39:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:39:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:39:55 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:39:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:39:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:39:55 compute-0 sudo[137416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:39:55 compute-0 sudo[137416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:39:55 compute-0 sudo[137416]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:55 compute-0 sudo[137466]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyogdtfuswgqkuqdcwgzotsfuhyaqetq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621594.911908-141-52690602897257/AnsiballZ_command.py'
Dec 01 20:39:55 compute-0 sudo[137466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:55 compute-0 ceph-mon[75880]: pgmap v306: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:39:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:39:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:39:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:39:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:39:55 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:39:55 compute-0 sudo[137469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:39:55 compute-0 sudo[137469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:39:55 compute-0 python3.9[137472]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:39:55 compute-0 sudo[137466]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:55 compute-0 podman[137507]: 2025-12-01 20:39:55.623575746 +0000 UTC m=+0.038873886 container create e1bec7f54a20f7a5af8784e34e4b2614f62cd80a0a06abf9e0d45850331b226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 01 20:39:55 compute-0 systemd[1]: Started libpod-conmon-e1bec7f54a20f7a5af8784e34e4b2614f62cd80a0a06abf9e0d45850331b226e.scope.
Dec 01 20:39:55 compute-0 podman[137507]: 2025-12-01 20:39:55.604793094 +0000 UTC m=+0.020091234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:39:55 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:39:55 compute-0 podman[137507]: 2025-12-01 20:39:55.793134176 +0000 UTC m=+0.208432326 container init e1bec7f54a20f7a5af8784e34e4b2614f62cd80a0a06abf9e0d45850331b226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 01 20:39:55 compute-0 podman[137507]: 2025-12-01 20:39:55.800772071 +0000 UTC m=+0.216070201 container start e1bec7f54a20f7a5af8784e34e4b2614f62cd80a0a06abf9e0d45850331b226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:39:55 compute-0 podman[137507]: 2025-12-01 20:39:55.804367146 +0000 UTC m=+0.219665276 container attach e1bec7f54a20f7a5af8784e34e4b2614f62cd80a0a06abf9e0d45850331b226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:39:55 compute-0 intelligent_edison[137549]: 167 167
Dec 01 20:39:55 compute-0 systemd[1]: libpod-e1bec7f54a20f7a5af8784e34e4b2614f62cd80a0a06abf9e0d45850331b226e.scope: Deactivated successfully.
Dec 01 20:39:55 compute-0 podman[137507]: 2025-12-01 20:39:55.806832485 +0000 UTC m=+0.222130615 container died e1bec7f54a20f7a5af8784e34e4b2614f62cd80a0a06abf9e0d45850331b226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:39:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e2651e26db41d1175bdb2fe3db4a2987d638ce25d9e55a222b6830fc7711b07-merged.mount: Deactivated successfully.
Dec 01 20:39:55 compute-0 podman[137507]: 2025-12-01 20:39:55.854172801 +0000 UTC m=+0.269470931 container remove e1bec7f54a20f7a5af8784e34e4b2614f62cd80a0a06abf9e0d45850331b226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 01 20:39:55 compute-0 systemd[1]: libpod-conmon-e1bec7f54a20f7a5af8784e34e4b2614f62cd80a0a06abf9e0d45850331b226e.scope: Deactivated successfully.
Dec 01 20:39:56 compute-0 podman[137627]: 2025-12-01 20:39:56.048586498 +0000 UTC m=+0.080096087 container create 3b90be3f56e08826ec474a2c6eb451fcef9d1d95bcf34b6eb3f877c6660c92e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mayer, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 01 20:39:56 compute-0 systemd[1]: Started libpod-conmon-3b90be3f56e08826ec474a2c6eb451fcef9d1d95bcf34b6eb3f877c6660c92e0.scope.
Dec 01 20:39:56 compute-0 podman[137627]: 2025-12-01 20:39:55.998868826 +0000 UTC m=+0.030378435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:39:56 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f3bcbfa625b3b501098b5d08d4a7721b2fa100b25c6fbcc5463cbf43783b66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f3bcbfa625b3b501098b5d08d4a7721b2fa100b25c6fbcc5463cbf43783b66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f3bcbfa625b3b501098b5d08d4a7721b2fa100b25c6fbcc5463cbf43783b66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f3bcbfa625b3b501098b5d08d4a7721b2fa100b25c6fbcc5463cbf43783b66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:39:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f3bcbfa625b3b501098b5d08d4a7721b2fa100b25c6fbcc5463cbf43783b66/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:39:56 compute-0 sudo[137719]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiesioohydkrqssejppbejavsmopwtqc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764621595.745591-149-179150859369253/AnsiballZ_edpm_nftables_from_files.py'
Dec 01 20:39:56 compute-0 sudo[137719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:56 compute-0 python3[137721]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 01 20:39:56 compute-0 sudo[137719]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:56 compute-0 podman[137627]: 2025-12-01 20:39:56.407578106 +0000 UTC m=+0.439087715 container init 3b90be3f56e08826ec474a2c6eb451fcef9d1d95bcf34b6eb3f877c6660c92e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:39:56 compute-0 podman[137627]: 2025-12-01 20:39:56.415031185 +0000 UTC m=+0.446540774 container start 3b90be3f56e08826ec474a2c6eb451fcef9d1d95bcf34b6eb3f877c6660c92e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mayer, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Dec 01 20:39:56 compute-0 podman[137627]: 2025-12-01 20:39:56.480617826 +0000 UTC m=+0.512127415 container attach 3b90be3f56e08826ec474a2c6eb451fcef9d1d95bcf34b6eb3f877c6660c92e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 01 20:39:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:39:56 compute-0 sudo[137880]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trwonjdhpcjjpqiafzmtvnzcxapndytm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621596.5137563-157-7141116043060/AnsiballZ_stat.py'
Dec 01 20:39:56 compute-0 sudo[137880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:56 compute-0 stupefied_mayer[137690]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:39:56 compute-0 stupefied_mayer[137690]: --> All data devices are unavailable
Dec 01 20:39:56 compute-0 systemd[1]: libpod-3b90be3f56e08826ec474a2c6eb451fcef9d1d95bcf34b6eb3f877c6660c92e0.scope: Deactivated successfully.
Dec 01 20:39:56 compute-0 podman[137627]: 2025-12-01 20:39:56.92473777 +0000 UTC m=+0.956247369 container died 3b90be3f56e08826ec474a2c6eb451fcef9d1d95bcf34b6eb3f877c6660c92e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 01 20:39:56 compute-0 python3.9[137882]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7f3bcbfa625b3b501098b5d08d4a7721b2fa100b25c6fbcc5463cbf43783b66-merged.mount: Deactivated successfully.
Dec 01 20:39:56 compute-0 sudo[137880]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:57 compute-0 podman[137627]: 2025-12-01 20:39:57.129728815 +0000 UTC m=+1.161238404 container remove 3b90be3f56e08826ec474a2c6eb451fcef9d1d95bcf34b6eb3f877c6660c92e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mayer, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:39:57 compute-0 systemd[1]: libpod-conmon-3b90be3f56e08826ec474a2c6eb451fcef9d1d95bcf34b6eb3f877c6660c92e0.scope: Deactivated successfully.
Dec 01 20:39:57 compute-0 sudo[137469]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:57 compute-0 sudo[137952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:39:57 compute-0 sudo[137952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:39:57 compute-0 sudo[137952]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:57 compute-0 sudo[137977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:39:57 compute-0 sudo[137977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:39:57 compute-0 sudo[138075]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzftsgouqzfhsflvkxxqrzpuywzkegjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621596.5137563-157-7141116043060/AnsiballZ_copy.py'
Dec 01 20:39:57 compute-0 sudo[138075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:57 compute-0 python3.9[138077]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621596.5137563-157-7141116043060/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:57 compute-0 podman[138090]: 2025-12-01 20:39:57.619005646 +0000 UTC m=+0.082536245 container create aa2c8a8cc24e6592e371b7cb465f8083ec7f845be2759ba336646bfe01046766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 01 20:39:57 compute-0 sudo[138075]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:57 compute-0 podman[138090]: 2025-12-01 20:39:57.560384248 +0000 UTC m=+0.023914877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:39:57 compute-0 systemd[1]: Started libpod-conmon-aa2c8a8cc24e6592e371b7cb465f8083ec7f845be2759ba336646bfe01046766.scope.
Dec 01 20:39:57 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:39:57 compute-0 podman[138090]: 2025-12-01 20:39:57.823464334 +0000 UTC m=+0.286994953 container init aa2c8a8cc24e6592e371b7cb465f8083ec7f845be2759ba336646bfe01046766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 01 20:39:57 compute-0 podman[138090]: 2025-12-01 20:39:57.830137728 +0000 UTC m=+0.293668327 container start aa2c8a8cc24e6592e371b7cb465f8083ec7f845be2759ba336646bfe01046766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:39:57 compute-0 inspiring_ptolemy[138130]: 167 167
Dec 01 20:39:57 compute-0 systemd[1]: libpod-aa2c8a8cc24e6592e371b7cb465f8083ec7f845be2759ba336646bfe01046766.scope: Deactivated successfully.
Dec 01 20:39:57 compute-0 podman[138090]: 2025-12-01 20:39:57.914063526 +0000 UTC m=+0.377594125 container attach aa2c8a8cc24e6592e371b7cb465f8083ec7f845be2759ba336646bfe01046766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ptolemy, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:39:57 compute-0 podman[138090]: 2025-12-01 20:39:57.914608853 +0000 UTC m=+0.378139452 container died aa2c8a8cc24e6592e371b7cb465f8083ec7f845be2759ba336646bfe01046766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 01 20:39:57 compute-0 ceph-mon[75880]: pgmap v307: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ee070cae3eeef118f9bf35272a626a7145716316c9d9bafe697f96f21055cea-merged.mount: Deactivated successfully.
Dec 01 20:39:58 compute-0 sudo[138272]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjawjottgxgkrjqqqopxvmunetyxrszx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621597.7628367-172-128821736863941/AnsiballZ_stat.py'
Dec 01 20:39:58 compute-0 sudo[138272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:58 compute-0 podman[138090]: 2025-12-01 20:39:58.074511544 +0000 UTC m=+0.538042143 container remove aa2c8a8cc24e6592e371b7cb465f8083ec7f845be2759ba336646bfe01046766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:39:58 compute-0 systemd[1]: libpod-conmon-aa2c8a8cc24e6592e371b7cb465f8083ec7f845be2759ba336646bfe01046766.scope: Deactivated successfully.
Dec 01 20:39:58 compute-0 python3.9[138274]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:58 compute-0 sudo[138272]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:58 compute-0 podman[138282]: 2025-12-01 20:39:58.203825436 +0000 UTC m=+0.025183608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:39:58 compute-0 podman[138282]: 2025-12-01 20:39:58.32354019 +0000 UTC m=+0.144898362 container create 2b88d36896bdfcd06f04ea190d8b99f406a5d9a8342fc2a7cf7c7f0cbb37f863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jones, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 01 20:39:58 compute-0 systemd[1]: Started libpod-conmon-2b88d36896bdfcd06f04ea190d8b99f406a5d9a8342fc2a7cf7c7f0cbb37f863.scope.
Dec 01 20:39:58 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f51f0238c6cde0d84ca23f50002a4fe0092e8e9ccf9f42f03436ce3246e99acd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f51f0238c6cde0d84ca23f50002a4fe0092e8e9ccf9f42f03436ce3246e99acd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f51f0238c6cde0d84ca23f50002a4fe0092e8e9ccf9f42f03436ce3246e99acd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f51f0238c6cde0d84ca23f50002a4fe0092e8e9ccf9f42f03436ce3246e99acd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:39:58 compute-0 podman[138282]: 2025-12-01 20:39:58.427839321 +0000 UTC m=+0.249197483 container init 2b88d36896bdfcd06f04ea190d8b99f406a5d9a8342fc2a7cf7c7f0cbb37f863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 01 20:39:58 compute-0 podman[138282]: 2025-12-01 20:39:58.437412268 +0000 UTC m=+0.258770420 container start 2b88d36896bdfcd06f04ea190d8b99f406a5d9a8342fc2a7cf7c7f0cbb37f863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:39:58 compute-0 podman[138282]: 2025-12-01 20:39:58.440985021 +0000 UTC m=+0.262343173 container attach 2b88d36896bdfcd06f04ea190d8b99f406a5d9a8342fc2a7cf7c7f0cbb37f863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jones, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:39:58 compute-0 sudo[138426]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zscviwbznumsxhexsawxlknxmjbshktk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621597.7628367-172-128821736863941/AnsiballZ_copy.py'
Dec 01 20:39:58 compute-0 sudo[138426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:58 compute-0 python3.9[138428]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621597.7628367-172-128821736863941/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:39:58 compute-0 quizzical_jones[138348]: {
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:     "0": [
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:         {
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "devices": [
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "/dev/loop3"
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             ],
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_name": "ceph_lv0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_size": "21470642176",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "name": "ceph_lv0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "tags": {
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.cluster_name": "ceph",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.crush_device_class": "",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.encrypted": "0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.objectstore": "bluestore",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.osd_id": "0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.type": "block",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.vdo": "0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.with_tpm": "0"
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             },
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "type": "block",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "vg_name": "ceph_vg0"
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:         }
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:     ],
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:     "1": [
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:         {
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "devices": [
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "/dev/loop4"
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             ],
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_name": "ceph_lv1",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_size": "21470642176",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "name": "ceph_lv1",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "tags": {
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.cluster_name": "ceph",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.crush_device_class": "",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.encrypted": "0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.objectstore": "bluestore",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.osd_id": "1",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.type": "block",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.vdo": "0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.with_tpm": "0"
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             },
Dec 01 20:39:58 compute-0 sudo[138426]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "type": "block",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "vg_name": "ceph_vg1"
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:         }
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:     ],
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:     "2": [
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:         {
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "devices": [
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "/dev/loop5"
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             ],
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_name": "ceph_lv2",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_size": "21470642176",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "name": "ceph_lv2",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "tags": {
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.cluster_name": "ceph",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.crush_device_class": "",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.encrypted": "0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.objectstore": "bluestore",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.osd_id": "2",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.type": "block",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.vdo": "0",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:                 "ceph.with_tpm": "0"
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             },
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "type": "block",
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:             "vg_name": "ceph_vg2"
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:         }
Dec 01 20:39:58 compute-0 quizzical_jones[138348]:     ]
Dec 01 20:39:58 compute-0 quizzical_jones[138348]: }
Dec 01 20:39:58 compute-0 systemd[1]: libpod-2b88d36896bdfcd06f04ea190d8b99f406a5d9a8342fc2a7cf7c7f0cbb37f863.scope: Deactivated successfully.
Dec 01 20:39:58 compute-0 podman[138282]: 2025-12-01 20:39:58.781277971 +0000 UTC m=+0.602636123 container died 2b88d36896bdfcd06f04ea190d8b99f406a5d9a8342fc2a7cf7c7f0cbb37f863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jones, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:39:59 compute-0 sudo[138594]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnfafjqzqcasrnfbwkjglqzhunbfgfnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621598.9405386-187-257580475948499/AnsiballZ_stat.py'
Dec 01 20:39:59 compute-0 sudo[138594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:59 compute-0 python3.9[138596]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:39:59 compute-0 sudo[138594]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f51f0238c6cde0d84ca23f50002a4fe0092e8e9ccf9f42f03436ce3246e99acd-merged.mount: Deactivated successfully.
Dec 01 20:39:59 compute-0 ceph-mon[75880]: pgmap v308: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:39:59 compute-0 podman[138282]: 2025-12-01 20:39:59.680120161 +0000 UTC m=+1.501478313 container remove 2b88d36896bdfcd06f04ea190d8b99f406a5d9a8342fc2a7cf7c7f0cbb37f863 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jones, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 01 20:39:59 compute-0 systemd[1]: libpod-conmon-2b88d36896bdfcd06f04ea190d8b99f406a5d9a8342fc2a7cf7c7f0cbb37f863.scope: Deactivated successfully.
Dec 01 20:39:59 compute-0 sudo[137977]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:59 compute-0 sudo[138695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:39:59 compute-0 sudo[138695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:39:59 compute-0 sudo[138695]: pam_unix(sudo:session): session closed for user root
Dec 01 20:39:59 compute-0 sudo[138744]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyhlbawmsxvmiiuqiwraevcyhxidvbkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621598.9405386-187-257580475948499/AnsiballZ_copy.py'
Dec 01 20:39:59 compute-0 sudo[138744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:39:59 compute-0 sudo[138748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:39:59 compute-0 sudo[138748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:39:59 compute-0 python3.9[138749]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621598.9405386-187-257580475948499/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:00 compute-0 sudo[138744]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:00 compute-0 podman[138808]: 2025-12-01 20:40:00.131857052 +0000 UTC m=+0.041710118 container create 8191ed1f157f6cdb737f44b3aa93baa7966deef25fa839265bc139157fb09a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 01 20:40:00 compute-0 systemd[1]: Started libpod-conmon-8191ed1f157f6cdb737f44b3aa93baa7966deef25fa839265bc139157fb09a2e.scope.
Dec 01 20:40:00 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:40:00 compute-0 podman[138808]: 2025-12-01 20:40:00.113424076 +0000 UTC m=+0.023277202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:40:00 compute-0 podman[138808]: 2025-12-01 20:40:00.242054931 +0000 UTC m=+0.151908037 container init 8191ed1f157f6cdb737f44b3aa93baa7966deef25fa839265bc139157fb09a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermat, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:40:00 compute-0 podman[138808]: 2025-12-01 20:40:00.251003056 +0000 UTC m=+0.160856132 container start 8191ed1f157f6cdb737f44b3aa93baa7966deef25fa839265bc139157fb09a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermat, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 01 20:40:00 compute-0 podman[138808]: 2025-12-01 20:40:00.256081198 +0000 UTC m=+0.165934274 container attach 8191ed1f157f6cdb737f44b3aa93baa7966deef25fa839265bc139157fb09a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:40:00 compute-0 systemd[1]: libpod-8191ed1f157f6cdb737f44b3aa93baa7966deef25fa839265bc139157fb09a2e.scope: Deactivated successfully.
Dec 01 20:40:00 compute-0 busy_fermat[138856]: 167 167
Dec 01 20:40:00 compute-0 podman[138808]: 2025-12-01 20:40:00.258259907 +0000 UTC m=+0.168113003 container died 8191ed1f157f6cdb737f44b3aa93baa7966deef25fa839265bc139157fb09a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermat, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7409126838b279554bdebc640547dddb65ea6a2013c5c2d12301fe3b657ab61e-merged.mount: Deactivated successfully.
Dec 01 20:40:00 compute-0 podman[138808]: 2025-12-01 20:40:00.293823399 +0000 UTC m=+0.203676475 container remove 8191ed1f157f6cdb737f44b3aa93baa7966deef25fa839265bc139157fb09a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:40:00 compute-0 systemd[1]: libpod-conmon-8191ed1f157f6cdb737f44b3aa93baa7966deef25fa839265bc139157fb09a2e.scope: Deactivated successfully.
Dec 01 20:40:00 compute-0 sudo[138982]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqtgwcxjjwijrewvcjrisrgtgpuqdhfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621600.1464539-202-193982836616902/AnsiballZ_stat.py'
Dec 01 20:40:00 compute-0 sudo[138982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:00 compute-0 podman[138953]: 2025-12-01 20:40:00.441869193 +0000 UTC m=+0.043271199 container create 72e0ae47a3849e90be3e410624d181303654683123448b840f073cce571888b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:40:00 compute-0 systemd[1]: Started libpod-conmon-72e0ae47a3849e90be3e410624d181303654683123448b840f073cce571888b4.scope.
Dec 01 20:40:00 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd6e359e9268ee8cb386808fb0955cfeb586a4d49cd7c1020abb6be6c4ae05f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd6e359e9268ee8cb386808fb0955cfeb586a4d49cd7c1020abb6be6c4ae05f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd6e359e9268ee8cb386808fb0955cfeb586a4d49cd7c1020abb6be6c4ae05f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:40:00 compute-0 podman[138953]: 2025-12-01 20:40:00.425187252 +0000 UTC m=+0.026589278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd6e359e9268ee8cb386808fb0955cfeb586a4d49cd7c1020abb6be6c4ae05f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:40:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:00 compute-0 podman[138953]: 2025-12-01 20:40:00.592679695 +0000 UTC m=+0.194081731 container init 72e0ae47a3849e90be3e410624d181303654683123448b840f073cce571888b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:40:00 compute-0 podman[138953]: 2025-12-01 20:40:00.599993627 +0000 UTC m=+0.201395633 container start 72e0ae47a3849e90be3e410624d181303654683123448b840f073cce571888b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:40:00 compute-0 python3.9[138987]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:40:00 compute-0 sudo[138982]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:00 compute-0 podman[138953]: 2025-12-01 20:40:00.654579005 +0000 UTC m=+0.255981011 container attach 72e0ae47a3849e90be3e410624d181303654683123448b840f073cce571888b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 01 20:40:00 compute-0 sudo[139131]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpjkudskkytlacmqwmkhfwazfkjywgsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621600.1464539-202-193982836616902/AnsiballZ_copy.py'
Dec 01 20:40:00 compute-0 sudo[139131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:01 compute-0 python3.9[139135]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621600.1464539-202-193982836616902/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:01 compute-0 sudo[139131]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:01 compute-0 lvm[139217]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:40:01 compute-0 lvm[139217]: VG ceph_vg0 finished
Dec 01 20:40:01 compute-0 lvm[139218]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:40:01 compute-0 lvm[139218]: VG ceph_vg1 finished
Dec 01 20:40:01 compute-0 lvm[139220]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:40:01 compute-0 lvm[139220]: VG ceph_vg2 finished
Dec 01 20:40:01 compute-0 infallible_chatelet[138990]: {}
Dec 01 20:40:01 compute-0 systemd[1]: libpod-72e0ae47a3849e90be3e410624d181303654683123448b840f073cce571888b4.scope: Deactivated successfully.
Dec 01 20:40:01 compute-0 systemd[1]: libpod-72e0ae47a3849e90be3e410624d181303654683123448b840f073cce571888b4.scope: Consumed 1.281s CPU time.
Dec 01 20:40:01 compute-0 podman[138953]: 2025-12-01 20:40:01.401662961 +0000 UTC m=+1.003064967 container died 72e0ae47a3849e90be3e410624d181303654683123448b840f073cce571888b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 01 20:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd6e359e9268ee8cb386808fb0955cfeb586a4d49cd7c1020abb6be6c4ae05f7-merged.mount: Deactivated successfully.
Dec 01 20:40:01 compute-0 podman[138953]: 2025-12-01 20:40:01.452299513 +0000 UTC m=+1.053701519 container remove 72e0ae47a3849e90be3e410624d181303654683123448b840f073cce571888b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 01 20:40:01 compute-0 systemd[1]: libpod-conmon-72e0ae47a3849e90be3e410624d181303654683123448b840f073cce571888b4.scope: Deactivated successfully.
Dec 01 20:40:01 compute-0 sudo[138748]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:40:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:40:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:40:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:40:01 compute-0 sudo[139290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:40:01 compute-0 sudo[139290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:40:01 compute-0 sudo[139290]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:01 compute-0 sudo[139387]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apmjxpatojxnmwrvhrcgdmepwnujfjhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621601.3624413-217-197036143761538/AnsiballZ_stat.py'
Dec 01 20:40:01 compute-0 sudo[139387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:01 compute-0 python3.9[139389]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:40:01 compute-0 sudo[139387]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:02 compute-0 ceph-mon[75880]: pgmap v309: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:40:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:40:02 compute-0 sudo[139512]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kahuehldzgzwkxymmhbfiwaodqqbbjhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621601.3624413-217-197036143761538/AnsiballZ_copy.py'
Dec 01 20:40:02 compute-0 sudo[139512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:02 compute-0 python3.9[139514]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621601.3624413-217-197036143761538/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:02 compute-0 sudo[139512]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:02 compute-0 sudo[139664]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtumqugtrsemenzxuqrwcptpbmovqdoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621602.594824-232-117446268981742/AnsiballZ_file.py'
Dec 01 20:40:02 compute-0 sudo[139664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:03 compute-0 python3.9[139666]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:03 compute-0 sudo[139664]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:40:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:40:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:40:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:40:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:40:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:40:03 compute-0 sudo[139816]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykxjrlazufsvbyyxvmvvllnfcqsdhkms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621603.1690404-240-145873689885018/AnsiballZ_command.py'
Dec 01 20:40:03 compute-0 sudo[139816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:03 compute-0 python3.9[139818]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:40:03 compute-0 sudo[139816]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:04 compute-0 sudo[139971]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njxdawqsmxunxebujdyayfwuroyqdpxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621603.7748427-248-92799683212998/AnsiballZ_blockinfile.py'
Dec 01 20:40:04 compute-0 sudo[139971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:04 compute-0 ceph-mon[75880]: pgmap v310: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:04 compute-0 python3.9[139973]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:04 compute-0 sudo[139971]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v311: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:04 compute-0 sudo[140123]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeonyuoayhtnxerqfdimtvlopkvufbal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621604.5915272-257-92659812086474/AnsiballZ_command.py'
Dec 01 20:40:04 compute-0 sudo[140123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:05 compute-0 python3.9[140125]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:40:05 compute-0 sudo[140123]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:05 compute-0 ceph-mon[75880]: pgmap v311: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:05 compute-0 sudo[140276]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfsxjpsfgtwscdpqzjjdsrfkduubyjom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621605.1948798-265-260963480864889/AnsiballZ_stat.py'
Dec 01 20:40:05 compute-0 sudo[140276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:05 compute-0 python3.9[140278]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:40:05 compute-0 sudo[140276]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:06 compute-0 sudo[140430]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odzfufgmzysoahiqlvovvjcargbhzyla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621605.905799-273-180921198437049/AnsiballZ_command.py'
Dec 01 20:40:06 compute-0 sudo[140430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:06 compute-0 python3.9[140432]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:40:06 compute-0 sudo[140430]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:06 compute-0 sudo[140585]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avswjzbgegziruvppmuyadcwraufbrjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621606.5379903-281-200733209091790/AnsiballZ_file.py'
Dec 01 20:40:06 compute-0 sudo[140585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:06 compute-0 python3.9[140587]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:06 compute-0 sudo[140585]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:08 compute-0 ceph-mon[75880]: pgmap v312: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:08 compute-0 python3.9[140737]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:40:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:09 compute-0 sudo[140888]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvfsohaizlblslpcroktzznopawqxhxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621609.062443-321-86580930311621/AnsiballZ_command.py'
Dec 01 20:40:09 compute-0 sudo[140888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:09 compute-0 ceph-mon[75880]: pgmap v313: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:09 compute-0 python3.9[140890]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:40:09 compute-0 ovs-vsctl[140891]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 01 20:40:09 compute-0 sudo[140888]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:10 compute-0 sudo[141041]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zynlioxoylswqsesxigywckbwkmowljf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621609.7272894-330-222821602492674/AnsiballZ_command.py'
Dec 01 20:40:10 compute-0 sudo[141041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:10 compute-0 python3.9[141043]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:40:10 compute-0 sudo[141041]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:10 compute-0 sudo[141196]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhkaweagmwzuquazjfynqkoczwpllqay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621610.4352655-338-50005467857100/AnsiballZ_command.py'
Dec 01 20:40:10 compute-0 sudo[141196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:10 compute-0 python3.9[141198]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:40:10 compute-0 ovs-vsctl[141199]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 01 20:40:10 compute-0 sudo[141196]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:11 compute-0 python3.9[141349]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:40:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:12 compute-0 ceph-mon[75880]: pgmap v314: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:12 compute-0 sudo[141501]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alsymdqwkuogkqtmgkqwfcafabfarrhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621611.812504-355-214720777790707/AnsiballZ_file.py'
Dec 01 20:40:12 compute-0 sudo[141501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:12 compute-0 python3.9[141503]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:12 compute-0 sudo[141501]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:12 compute-0 sudo[141653]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liwookqxbpkrcrvfrufzzskkpionbpfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621612.445942-363-272877702946974/AnsiballZ_stat.py'
Dec 01 20:40:12 compute-0 sudo[141653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:12 compute-0 python3.9[141655]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:40:12 compute-0 sudo[141653]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:13 compute-0 sudo[141731]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cseqkdgdqrddizezlbezpgxfxmclwhnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621612.445942-363-272877702946974/AnsiballZ_file.py'
Dec 01 20:40:13 compute-0 sudo[141731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:13 compute-0 python3.9[141733]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:13 compute-0 sudo[141731]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:13 compute-0 sudo[141883]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikivgzwbuijeksilvxunuabhadbnclcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621613.4197178-363-27867039971475/AnsiballZ_stat.py'
Dec 01 20:40:13 compute-0 sudo[141883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:13 compute-0 python3.9[141885]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:40:13 compute-0 sudo[141883]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:14 compute-0 sudo[141961]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzejnseturvevxldokrlflgtbmrknidc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621613.4197178-363-27867039971475/AnsiballZ_file.py'
Dec 01 20:40:14 compute-0 sudo[141961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:14 compute-0 ceph-mon[75880]: pgmap v315: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:14 compute-0 python3.9[141963]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:14 compute-0 sudo[141961]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:14 compute-0 sudo[142113]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohthzknvuxjfmnickuvkhdoamynwmvho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621614.4297442-386-123138556517966/AnsiballZ_file.py'
Dec 01 20:40:14 compute-0 sudo[142113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:14 compute-0 python3.9[142115]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:14 compute-0 sudo[142113]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:15 compute-0 sudo[142265]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnjbzvuzxfegaucsoblynlmkpkgjhpna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621615.181536-394-165337913338025/AnsiballZ_stat.py'
Dec 01 20:40:15 compute-0 sudo[142265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:15 compute-0 python3.9[142267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:40:15 compute-0 sudo[142265]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:15 compute-0 sudo[142343]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btadppdlszwhcowwcusbbdqylkbqsfmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621615.181536-394-165337913338025/AnsiballZ_file.py'
Dec 01 20:40:15 compute-0 sudo[142343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:16 compute-0 python3.9[142345]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:16 compute-0 sudo[142343]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:16 compute-0 ceph-mon[75880]: pgmap v316: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:16 compute-0 sudo[142495]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djgftbaresgqzraghgwfaqgghzszcukn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621616.1658025-406-45227535586856/AnsiballZ_stat.py'
Dec 01 20:40:16 compute-0 sudo[142495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:16 compute-0 python3.9[142497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:40:16 compute-0 sudo[142495]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:16 compute-0 sudo[142573]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckywvjvxdqkpgqioevtvfezpkmvogtil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621616.1658025-406-45227535586856/AnsiballZ_file.py'
Dec 01 20:40:16 compute-0 sudo[142573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:17 compute-0 python3.9[142575]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:17 compute-0 sudo[142573]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:17 compute-0 sudo[142725]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixuzxjaunhnzvqitqabclgfayrgvldtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621617.2349224-418-275191668603767/AnsiballZ_systemd.py'
Dec 01 20:40:17 compute-0 sudo[142725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:17 compute-0 ceph-mon[75880]: pgmap v317: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:17 compute-0 python3.9[142727]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:40:17 compute-0 systemd[1]: Reloading.
Dec 01 20:40:17 compute-0 systemd-rc-local-generator[142753]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:40:17 compute-0 systemd-sysv-generator[142758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:40:18 compute-0 sudo[142725]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:18 compute-0 sudo[142914]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjnrosxxxylmwbfqqfbbpfkutvrcddcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621618.2995446-426-86420674084525/AnsiballZ_stat.py'
Dec 01 20:40:18 compute-0 sudo[142914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:18 compute-0 python3.9[142916]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:40:18 compute-0 sudo[142914]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:18 compute-0 sudo[142992]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njwilgcyyemhykuhwrnjptbzulqekxll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621618.2995446-426-86420674084525/AnsiballZ_file.py'
Dec 01 20:40:18 compute-0 sudo[142992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:19 compute-0 python3.9[142994]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:19 compute-0 sudo[142992]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:19 compute-0 sudo[143144]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kevseuyixippsupvlsqklrawddlpflrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621619.3040383-438-25711142377261/AnsiballZ_stat.py'
Dec 01 20:40:19 compute-0 sudo[143144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:19 compute-0 python3.9[143146]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:40:19 compute-0 sudo[143144]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:19 compute-0 sudo[143222]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rabqiqmxpvytrbzvaxxutdyxmesaarwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621619.3040383-438-25711142377261/AnsiballZ_file.py'
Dec 01 20:40:19 compute-0 sudo[143222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:20 compute-0 ceph-mon[75880]: pgmap v318: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:20 compute-0 python3.9[143224]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:20 compute-0 sudo[143222]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:20 compute-0 sudo[143374]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jokemjhxecmczypjrjsciqtpayrgokfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621620.368108-450-227556225822428/AnsiballZ_systemd.py'
Dec 01 20:40:20 compute-0 sudo[143374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:20 compute-0 python3.9[143376]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:40:20 compute-0 systemd[1]: Reloading.
Dec 01 20:40:20 compute-0 systemd-rc-local-generator[143401]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:40:20 compute-0 systemd-sysv-generator[143405]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:40:21 compute-0 systemd[1]: Starting Create netns directory...
Dec 01 20:40:21 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 01 20:40:21 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 01 20:40:21 compute-0 systemd[1]: Finished Create netns directory.
Dec 01 20:40:21 compute-0 sudo[143374]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.709877) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621621709904, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 769, "num_deletes": 250, "total_data_size": 698950, "memory_usage": 713632, "flush_reason": "Manual Compaction"}
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 01 20:40:21 compute-0 sudo[143566]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uavrntvtqgpownwwfcqmtyymappqvuub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621621.4616008-460-26615248620876/AnsiballZ_file.py'
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621621715555, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 447394, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6797, "largest_seqno": 7565, "table_properties": {"data_size": 444130, "index_size": 1108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8199, "raw_average_key_size": 19, "raw_value_size": 437304, "raw_average_value_size": 1038, "num_data_blocks": 52, "num_entries": 421, "num_filter_entries": 421, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621553, "oldest_key_time": 1764621553, "file_creation_time": 1764621621, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 5708 microseconds, and 1610 cpu microseconds.
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.715585) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 447394 bytes OK
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.715597) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.717056) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.717068) EVENT_LOG_v1 {"time_micros": 1764621621717065, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.717079) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 695069, prev total WAL file size 695069, number of live WAL files 2.
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.717458) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(436KB)], [20(5619KB)]
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621621717497, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 6201859, "oldest_snapshot_seqno": -1}
Dec 01 20:40:21 compute-0 sudo[143566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2614 keys, 4488149 bytes, temperature: kUnknown
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621621754331, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 4488149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4468578, "index_size": 11910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6597, "raw_key_size": 60684, "raw_average_key_size": 23, "raw_value_size": 4419634, "raw_average_value_size": 1690, "num_data_blocks": 539, "num_entries": 2614, "num_filter_entries": 2614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 1764621621, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.754845) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 4488149 bytes
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.756890) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.8 rd, 120.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 5.5 +0.0 blob) out(4.3 +0.0 blob), read-write-amplify(23.9) write-amplify(10.0) OK, records in: 3097, records dropped: 483 output_compression: NoCompression
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.756914) EVENT_LOG_v1 {"time_micros": 1764621621756902, "job": 6, "event": "compaction_finished", "compaction_time_micros": 37177, "compaction_time_cpu_micros": 16434, "output_level": 6, "num_output_files": 1, "total_output_size": 4488149, "num_input_records": 3097, "num_output_records": 2614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621621757307, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621621758823, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.717394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.758984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.758988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.758989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.758991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:40:21 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:40:21.758992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:40:21 compute-0 python3.9[143568]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:21 compute-0 sudo[143566]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:22 compute-0 ceph-mon[75880]: pgmap v319: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:22 compute-0 sudo[143718]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpawqjbzvnluuzmjauaxambejtvwsjte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621622.0555308-468-276317178870634/AnsiballZ_stat.py'
Dec 01 20:40:22 compute-0 sudo[143718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:22 compute-0 python3.9[143720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:40:22 compute-0 sudo[143718]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:22 compute-0 sudo[143841]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkvhndqbdgwmbjppcnlfoekitdoichdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621622.0555308-468-276317178870634/AnsiballZ_copy.py'
Dec 01 20:40:22 compute-0 sudo[143841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:22 compute-0 python3.9[143843]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764621622.0555308-468-276317178870634/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:23 compute-0 sudo[143841]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:23 compute-0 sudo[143993]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofrmedutpkhwtciqbctesnguugohqmrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621623.3506846-485-120088773131874/AnsiballZ_file.py'
Dec 01 20:40:23 compute-0 sudo[143993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:23 compute-0 python3.9[143995]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:23 compute-0 sudo[143993]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:24 compute-0 ceph-mon[75880]: pgmap v320: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:24 compute-0 sudo[144145]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nydaslwkqtwgbmwbjvhcslqllxkabiyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621624.0146115-493-195555768864750/AnsiballZ_stat.py'
Dec 01 20:40:24 compute-0 sudo[144145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:24 compute-0 python3.9[144147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:40:24 compute-0 sudo[144145]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:24 compute-0 sudo[144268]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjhoxeijpkcdjdgotuemgvxykyfwjynw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621624.0146115-493-195555768864750/AnsiballZ_copy.py'
Dec 01 20:40:24 compute-0 sudo[144268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:24 compute-0 python3.9[144270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621624.0146115-493-195555768864750/.source.json _original_basename=.syq54iso follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:24 compute-0 sudo[144268]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:25 compute-0 sudo[144420]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaqchcrybyppvpzcmgjpcasbxxhntwct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621625.2237961-508-2117989828267/AnsiballZ_file.py'
Dec 01 20:40:25 compute-0 sudo[144420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:25 compute-0 python3.9[144422]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:25 compute-0 sudo[144420]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:26 compute-0 sudo[144572]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buvxwoiwqeaaclifxujvcftjpjygaqvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621625.8772583-516-105655745726871/AnsiballZ_stat.py'
Dec 01 20:40:26 compute-0 sudo[144572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:26 compute-0 ceph-mon[75880]: pgmap v321: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:26 compute-0 sudo[144572]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:26 compute-0 sudo[144695]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbftgjpvcnylpagqrhshxztnhrrcbyeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621625.8772583-516-105655745726871/AnsiballZ_copy.py'
Dec 01 20:40:26 compute-0 sudo[144695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:26 compute-0 sudo[144695]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:27 compute-0 ceph-mon[75880]: pgmap v322: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:27 compute-0 sudo[144847]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtupzeyfsrrrjxxqzruuxcmsuolcvruk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621627.1596794-533-117338476793359/AnsiballZ_container_config_data.py'
Dec 01 20:40:27 compute-0 sudo[144847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:27 compute-0 python3.9[144849]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 01 20:40:27 compute-0 sudo[144847]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:28 compute-0 sudo[144999]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otgxkmzrbpmxjoqfbmoybzraalegrawc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621627.9279253-542-262282881716156/AnsiballZ_container_config_hash.py'
Dec 01 20:40:28 compute-0 sudo[144999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:28 compute-0 python3.9[145001]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 01 20:40:28 compute-0 sudo[144999]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:29 compute-0 sudo[145151]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxeqrvhuqhnadslrkhrtkxqyypfqhnib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621628.7430596-551-127751730111014/AnsiballZ_podman_container_info.py'
Dec 01 20:40:29 compute-0 sudo[145151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:29 compute-0 python3.9[145153]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 01 20:40:29 compute-0 sudo[145151]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:29 compute-0 ceph-mon[75880]: pgmap v323: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v324: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:30 compute-0 sudo[145329]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdmhxsektponzunjohmmzpfpgeutvuzx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764621630.033882-564-93369113728994/AnsiballZ_edpm_container_manage.py'
Dec 01 20:40:30 compute-0 sudo[145329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:30 compute-0 python3[145331]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 01 20:40:31 compute-0 ceph-mon[75880]: pgmap v324: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:40:32
Dec 01 20:40:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:40:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:40:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'volumes', 'backups', 'cephfs.cephfs.meta', 'vms', 'images']
Dec 01 20:40:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:40:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:40:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:40:33 compute-0 ceph-mon[75880]: pgmap v325: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:35 compute-0 ceph-mon[75880]: pgmap v326: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:36 compute-0 podman[145344]: 2025-12-01 20:40:36.187408431 +0000 UTC m=+5.271040969 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 01 20:40:36 compute-0 podman[145463]: 2025-12-01 20:40:36.309710166 +0000 UTC m=+0.039495969 container create b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 01 20:40:36 compute-0 podman[145463]: 2025-12-01 20:40:36.287920221 +0000 UTC m=+0.017706054 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 01 20:40:36 compute-0 python3[145331]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 01 20:40:36 compute-0 sudo[145329]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:36 compute-0 sudo[145651]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjzgrzwqyyhyduyyvzmxxefvsmpctgit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621636.5943308-572-165672724124400/AnsiballZ_stat.py'
Dec 01 20:40:36 compute-0 sudo[145651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:37 compute-0 python3.9[145653]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:40:37 compute-0 sudo[145651]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:37 compute-0 sudo[145805]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqtpjcztruaihlguekerpwbzcfgyrrqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621637.2935913-581-275385329487026/AnsiballZ_file.py'
Dec 01 20:40:37 compute-0 sudo[145805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:38 compute-0 ceph-mon[75880]: pgmap v327: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:38 compute-0 python3.9[145807]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:38 compute-0 sudo[145805]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:38 compute-0 sudo[145882]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayvyxwigiznyyjqxhnetgogjjilayibl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621637.2935913-581-275385329487026/AnsiballZ_stat.py'
Dec 01 20:40:38 compute-0 sudo[145882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:38 compute-0 python3.9[145884]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:40:38 compute-0 sudo[145882]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:38 compute-0 sudo[146033]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnucmjidxxbonkhigxodlebdtzsvlpyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621638.5982947-581-260590735876917/AnsiballZ_copy.py'
Dec 01 20:40:38 compute-0 sudo[146033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:39 compute-0 python3.9[146035]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764621638.5982947-581-260590735876917/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:40:39 compute-0 sudo[146033]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:39 compute-0 sudo[146109]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbkcmbaumxqyzqaallpgblwdqmuggpio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621638.5982947-581-260590735876917/AnsiballZ_systemd.py'
Dec 01 20:40:39 compute-0 sudo[146109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:39 compute-0 python3.9[146111]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 01 20:40:39 compute-0 systemd[1]: Reloading.
Dec 01 20:40:39 compute-0 systemd-rc-local-generator[146142]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:40:39 compute-0 systemd-sysv-generator[146146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:40:39 compute-0 sudo[146109]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:40 compute-0 ceph-mon[75880]: pgmap v328: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:40 compute-0 sudo[146221]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szzmrgbpdnummpjciiyrjegpvjmtotbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621638.5982947-581-260590735876917/AnsiballZ_systemd.py'
Dec 01 20:40:40 compute-0 sudo[146221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:40:40 compute-0 python3.9[146223]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:40:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:40 compute-0 systemd[1]: Reloading.
Dec 01 20:40:40 compute-0 systemd-rc-local-generator[146252]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:40:40 compute-0 systemd-sysv-generator[146255]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:40:40 compute-0 systemd[1]: Starting ovn_controller container...
Dec 01 20:40:41 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e349b93af145a0ec765a95bdc2b6247f48231ee73bc77d1c34f267dad29bf3a/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 01 20:40:41 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4.
Dec 01 20:40:41 compute-0 podman[146264]: 2025-12-01 20:40:41.049472979 +0000 UTC m=+0.113277138 container init b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 01 20:40:41 compute-0 ovn_controller[146279]: + sudo -E kolla_set_configs
Dec 01 20:40:41 compute-0 podman[146264]: 2025-12-01 20:40:41.087779408 +0000 UTC m=+0.151583487 container start b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 01 20:40:41 compute-0 edpm-start-podman-container[146264]: ovn_controller
Dec 01 20:40:41 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 01 20:40:41 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 01 20:40:41 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 01 20:40:41 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 01 20:40:41 compute-0 systemd[146314]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 01 20:40:41 compute-0 edpm-start-podman-container[146263]: Creating additional drop-in dependency for "ovn_controller" (b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4)
Dec 01 20:40:41 compute-0 podman[146285]: 2025-12-01 20:40:41.186503942 +0000 UTC m=+0.077903971 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 01 20:40:41 compute-0 systemd[1]: b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4-758f9c6f716b2934.service: Main process exited, code=exited, status=1/FAILURE
Dec 01 20:40:41 compute-0 systemd[1]: b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4-758f9c6f716b2934.service: Failed with result 'exit-code'.
Dec 01 20:40:41 compute-0 systemd[1]: Reloading.
Dec 01 20:40:41 compute-0 systemd[146314]: Queued start job for default target Main User Target.
Dec 01 20:40:41 compute-0 systemd-sysv-generator[146369]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:40:41 compute-0 systemd-rc-local-generator[146366]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:40:41 compute-0 systemd[146314]: Created slice User Application Slice.
Dec 01 20:40:41 compute-0 systemd[146314]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 01 20:40:41 compute-0 systemd[146314]: Started Daily Cleanup of User's Temporary Directories.
Dec 01 20:40:41 compute-0 systemd[146314]: Reached target Paths.
Dec 01 20:40:41 compute-0 systemd[146314]: Reached target Timers.
Dec 01 20:40:41 compute-0 systemd[146314]: Starting D-Bus User Message Bus Socket...
Dec 01 20:40:41 compute-0 systemd[146314]: Starting Create User's Volatile Files and Directories...
Dec 01 20:40:41 compute-0 systemd[146314]: Finished Create User's Volatile Files and Directories.
Dec 01 20:40:41 compute-0 systemd[146314]: Listening on D-Bus User Message Bus Socket.
Dec 01 20:40:41 compute-0 systemd[146314]: Reached target Sockets.
Dec 01 20:40:41 compute-0 systemd[146314]: Reached target Basic System.
Dec 01 20:40:41 compute-0 systemd[146314]: Reached target Main User Target.
Dec 01 20:40:41 compute-0 systemd[146314]: Startup finished in 141ms.
Dec 01 20:40:41 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 01 20:40:41 compute-0 systemd[1]: Started ovn_controller container.
Dec 01 20:40:41 compute-0 systemd[1]: Started Session c1 of User root.
Dec 01 20:40:41 compute-0 sudo[146221]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:41 compute-0 ovn_controller[146279]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 01 20:40:41 compute-0 ovn_controller[146279]: INFO:__main__:Validating config file
Dec 01 20:40:41 compute-0 ovn_controller[146279]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 01 20:40:41 compute-0 ovn_controller[146279]: INFO:__main__:Writing out command to execute
Dec 01 20:40:41 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 01 20:40:41 compute-0 ovn_controller[146279]: ++ cat /run_command
Dec 01 20:40:41 compute-0 ovn_controller[146279]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 01 20:40:41 compute-0 ovn_controller[146279]: + ARGS=
Dec 01 20:40:41 compute-0 ovn_controller[146279]: + sudo kolla_copy_cacerts
Dec 01 20:40:41 compute-0 systemd[1]: Started Session c2 of User root.
Dec 01 20:40:41 compute-0 ovn_controller[146279]: + [[ ! -n '' ]]
Dec 01 20:40:41 compute-0 ovn_controller[146279]: + . kolla_extend_start
Dec 01 20:40:41 compute-0 ovn_controller[146279]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 01 20:40:41 compute-0 ovn_controller[146279]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 01 20:40:41 compute-0 ovn_controller[146279]: + umask 0022
Dec 01 20:40:41 compute-0 ovn_controller[146279]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 01 20:40:41 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 01 20:40:41 compute-0 NetworkManager[49710]: <info>  [1764621641.6358] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 01 20:40:41 compute-0 NetworkManager[49710]: <info>  [1764621641.6364] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 01 20:40:41 compute-0 NetworkManager[49710]: <info>  [1764621641.6372] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 01 20:40:41 compute-0 NetworkManager[49710]: <info>  [1764621641.6375] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 01 20:40:41 compute-0 NetworkManager[49710]: <info>  [1764621641.6377] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 01 20:40:41 compute-0 kernel: br-int: entered promiscuous mode
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00024|main|INFO|OVS feature set changed, force recompute.
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 01 20:40:41 compute-0 ovn_controller[146279]: 2025-12-01T20:40:41Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 01 20:40:41 compute-0 NetworkManager[49710]: <info>  [1764621641.6588] manager: (ovn-d930b8-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 01 20:40:41 compute-0 systemd-udevd[146425]: Network interface NamePolicy= disabled on kernel command line.
Dec 01 20:40:41 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 01 20:40:41 compute-0 systemd-udevd[146429]: Network interface NamePolicy= disabled on kernel command line.
Dec 01 20:40:41 compute-0 NetworkManager[49710]: <info>  [1764621641.6756] device (genev_sys_6081): carrier: link connected
Dec 01 20:40:41 compute-0 NetworkManager[49710]: <info>  [1764621641.6759] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 01 20:40:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:41 compute-0 sudo[146542]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnnjkdqbpgwmqcdxgsuxsidecdqvrqui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621641.6946387-609-248439942076759/AnsiballZ_command.py'
Dec 01 20:40:41 compute-0 sudo[146542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:42 compute-0 ceph-mon[75880]: pgmap v329: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:42 compute-0 python3.9[146544]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:40:42 compute-0 ovs-vsctl[146545]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 01 20:40:42 compute-0 sudo[146542]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:42 compute-0 sudo[146695]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skisslvujojsgrkasnnvsuziidzxqwqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621642.346797-617-172591869688295/AnsiballZ_command.py'
Dec 01 20:40:42 compute-0 sudo[146695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:44 compute-0 python3.9[146697]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:40:44 compute-0 ovs-vsctl[146699]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 01 20:40:44 compute-0 sudo[146695]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:44 compute-0 ceph-mon[75880]: pgmap v330: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:45 compute-0 sudo[146850]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tafdjcyayriekczcxhituiosuxltdxlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621644.7346976-631-37090732927406/AnsiballZ_command.py'
Dec 01 20:40:45 compute-0 sudo[146850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:45 compute-0 python3.9[146852]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:40:45 compute-0 ovs-vsctl[146853]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 01 20:40:45 compute-0 sudo[146850]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:45 compute-0 ceph-mon[75880]: pgmap v331: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:45 compute-0 sshd-session[135062]: Connection closed by 192.168.122.30 port 59558
Dec 01 20:40:45 compute-0 sshd-session[135059]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:40:45 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Dec 01 20:40:45 compute-0 systemd[1]: session-47.scope: Consumed 53.414s CPU time.
Dec 01 20:40:45 compute-0 systemd-logind[796]: Session 47 logged out. Waiting for processes to exit.
Dec 01 20:40:45 compute-0 systemd-logind[796]: Removed session 47.
Dec 01 20:40:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:47 compute-0 ceph-mon[75880]: pgmap v332: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:49 compute-0 ceph-mon[75880]: pgmap v333: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:51 compute-0 sshd-session[146879]: Accepted publickey for zuul from 192.168.122.30 port 34080 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:40:51 compute-0 systemd-logind[796]: New session 49 of user zuul.
Dec 01 20:40:51 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec 01 20:40:51 compute-0 sshd-session[146879]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:40:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:51 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 01 20:40:51 compute-0 systemd[146314]: Activating special unit Exit the Session...
Dec 01 20:40:51 compute-0 systemd[146314]: Stopped target Main User Target.
Dec 01 20:40:51 compute-0 systemd[146314]: Stopped target Basic System.
Dec 01 20:40:51 compute-0 systemd[146314]: Stopped target Paths.
Dec 01 20:40:51 compute-0 systemd[146314]: Stopped target Sockets.
Dec 01 20:40:51 compute-0 systemd[146314]: Stopped target Timers.
Dec 01 20:40:51 compute-0 systemd[146314]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 01 20:40:51 compute-0 systemd[146314]: Closed D-Bus User Message Bus Socket.
Dec 01 20:40:51 compute-0 systemd[146314]: Stopped Create User's Volatile Files and Directories.
Dec 01 20:40:51 compute-0 systemd[146314]: Removed slice User Application Slice.
Dec 01 20:40:51 compute-0 systemd[146314]: Reached target Shutdown.
Dec 01 20:40:51 compute-0 systemd[146314]: Finished Exit the Session.
Dec 01 20:40:51 compute-0 systemd[146314]: Reached target Exit the Session.
Dec 01 20:40:51 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 01 20:40:51 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 01 20:40:51 compute-0 ceph-mon[75880]: pgmap v334: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:51 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 01 20:40:51 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 01 20:40:51 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 01 20:40:51 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 01 20:40:51 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 01 20:40:52 compute-0 python3.9[147034]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:40:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:53 compute-0 sudo[147188]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqwoddrquqdityatqszmflniwzbwcnho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621652.7264898-34-122125921195301/AnsiballZ_file.py'
Dec 01 20:40:53 compute-0 sudo[147188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:53 compute-0 python3.9[147190]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:53 compute-0 sudo[147188]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:53 compute-0 ceph-mon[75880]: pgmap v335: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:53 compute-0 sudo[147340]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bayywjeqoisuvhglybgsuaprkrjegalg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621653.5410492-34-30180042135348/AnsiballZ_file.py'
Dec 01 20:40:53 compute-0 sudo[147340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:54 compute-0 python3.9[147342]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:54 compute-0 sudo[147340]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:54 compute-0 sudo[147492]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnlfsbgbacrqxcnhmyjmrsehdvcjmozq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621654.2361603-34-126556057383311/AnsiballZ_file.py'
Dec 01 20:40:54 compute-0 sudo[147492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v336: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:54 compute-0 python3.9[147494]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:54 compute-0 sudo[147492]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:55 compute-0 sudo[147644]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vspwymbihvdphjpueszrquqvbvrdkqaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621654.8314686-34-255124456812204/AnsiballZ_file.py'
Dec 01 20:40:55 compute-0 sudo[147644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:55 compute-0 python3.9[147646]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:55 compute-0 sudo[147644]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:55 compute-0 sudo[147796]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nborqfekwjvxdowfpwwraednjadtglby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621655.4243343-34-268019061688913/AnsiballZ_file.py'
Dec 01 20:40:55 compute-0 sudo[147796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:55 compute-0 python3.9[147798]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:55 compute-0 sudo[147796]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:55 compute-0 ceph-mon[75880]: pgmap v336: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:56 compute-0 python3.9[147948]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:40:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:40:57 compute-0 sudo[148098]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftckqpmcazxmjunsmbbzmuhofefqesuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621656.8635757-78-154453346935505/AnsiballZ_seboolean.py'
Dec 01 20:40:57 compute-0 sudo[148098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:40:57 compute-0 python3.9[148100]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 01 20:40:57 compute-0 ceph-mon[75880]: pgmap v337: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:58 compute-0 sudo[148098]: pam_unix(sudo:session): session closed for user root
Dec 01 20:40:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:40:58 compute-0 python3.9[148250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:40:59 compute-0 python3.9[148371]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764621658.3416839-86-83979092169192/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:40:59 compute-0 ceph-mon[75880]: pgmap v338: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:00 compute-0 python3.9[148522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v339: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:00 compute-0 python3.9[148643]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764621659.8648207-101-160565351364369/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:41:01 compute-0 sudo[148793]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oravrwzieieunpvgnjzvmrerktmujaun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621661.0732603-118-73642439774007/AnsiballZ_setup.py'
Dec 01 20:41:01 compute-0 sudo[148793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:01 compute-0 sudo[148796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:41:01 compute-0 sudo[148796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:41:01 compute-0 sudo[148796]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:01 compute-0 python3.9[148795]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:41:01 compute-0 sudo[148821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:41:01 compute-0 sudo[148821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:41:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:01 compute-0 sudo[148793]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:01 compute-0 ceph-mon[75880]: pgmap v339: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:02 compute-0 sudo[148821]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:41:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:41:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:41:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:41:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:41:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:41:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:41:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:41:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:41:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:41:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:41:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:41:02 compute-0 sudo[148957]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kguwrydyqkofyqduudqsshjalqaducra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621661.0732603-118-73642439774007/AnsiballZ_dnf.py'
Dec 01 20:41:02 compute-0 sudo[148957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:02 compute-0 sudo[148958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:41:02 compute-0 sudo[148958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:41:02 compute-0 sudo[148958]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:02 compute-0 sudo[148985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:41:02 compute-0 sudo[148985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:41:02 compute-0 python3.9[148971]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:41:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:02 compute-0 podman[149022]: 2025-12-01 20:41:02.675680278 +0000 UTC m=+0.043628090 container create d95c1b8c8c65f1bffcfb11051dcffc62fc6ab8783b9c0976df8404fe3f70b86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:41:02 compute-0 systemd[1]: Started libpod-conmon-d95c1b8c8c65f1bffcfb11051dcffc62fc6ab8783b9c0976df8404fe3f70b86a.scope.
Dec 01 20:41:02 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:41:02 compute-0 podman[149022]: 2025-12-01 20:41:02.746756571 +0000 UTC m=+0.114704403 container init d95c1b8c8c65f1bffcfb11051dcffc62fc6ab8783b9c0976df8404fe3f70b86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:41:02 compute-0 podman[149022]: 2025-12-01 20:41:02.655894318 +0000 UTC m=+0.023842160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:41:02 compute-0 podman[149022]: 2025-12-01 20:41:02.752952208 +0000 UTC m=+0.120900020 container start d95c1b8c8c65f1bffcfb11051dcffc62fc6ab8783b9c0976df8404fe3f70b86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:41:02 compute-0 podman[149022]: 2025-12-01 20:41:02.756363657 +0000 UTC m=+0.124311489 container attach d95c1b8c8c65f1bffcfb11051dcffc62fc6ab8783b9c0976df8404fe3f70b86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:41:02 compute-0 elastic_sinoussi[149039]: 167 167
Dec 01 20:41:02 compute-0 systemd[1]: libpod-d95c1b8c8c65f1bffcfb11051dcffc62fc6ab8783b9c0976df8404fe3f70b86a.scope: Deactivated successfully.
Dec 01 20:41:02 compute-0 podman[149022]: 2025-12-01 20:41:02.761471639 +0000 UTC m=+0.129419471 container died d95c1b8c8c65f1bffcfb11051dcffc62fc6ab8783b9c0976df8404fe3f70b86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:41:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fef3d7a3b7f2acbf7ea46b85d0433a5faa8cce855b1e60e984d867dc237d3cf0-merged.mount: Deactivated successfully.
Dec 01 20:41:02 compute-0 podman[149022]: 2025-12-01 20:41:02.806737301 +0000 UTC m=+0.174685113 container remove d95c1b8c8c65f1bffcfb11051dcffc62fc6ab8783b9c0976df8404fe3f70b86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 01 20:41:02 compute-0 systemd[1]: libpod-conmon-d95c1b8c8c65f1bffcfb11051dcffc62fc6ab8783b9c0976df8404fe3f70b86a.scope: Deactivated successfully.
Dec 01 20:41:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:41:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:41:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:41:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:41:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:41:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:41:03 compute-0 podman[149064]: 2025-12-01 20:41:03.011874632 +0000 UTC m=+0.041780451 container create dd004ebd6bf0d6538186ab7b7b9781ff2586affd4ad01713f3fe17bcad96d39d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:41:03 compute-0 systemd[1]: Started libpod-conmon-dd004ebd6bf0d6538186ab7b7b9781ff2586affd4ad01713f3fe17bcad96d39d.scope.
Dec 01 20:41:03 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed8b49dbe56d8801a4024e96657de9adf0ee63c88c3cc446e63503967412bc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:03 compute-0 podman[149064]: 2025-12-01 20:41:02.995094368 +0000 UTC m=+0.025000217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed8b49dbe56d8801a4024e96657de9adf0ee63c88c3cc446e63503967412bc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed8b49dbe56d8801a4024e96657de9adf0ee63c88c3cc446e63503967412bc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed8b49dbe56d8801a4024e96657de9adf0ee63c88c3cc446e63503967412bc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed8b49dbe56d8801a4024e96657de9adf0ee63c88c3cc446e63503967412bc1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:03 compute-0 podman[149064]: 2025-12-01 20:41:03.1079521 +0000 UTC m=+0.137857949 container init dd004ebd6bf0d6538186ab7b7b9781ff2586affd4ad01713f3fe17bcad96d39d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 01 20:41:03 compute-0 podman[149064]: 2025-12-01 20:41:03.119668834 +0000 UTC m=+0.149574653 container start dd004ebd6bf0d6538186ab7b7b9781ff2586affd4ad01713f3fe17bcad96d39d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Dec 01 20:41:03 compute-0 podman[149064]: 2025-12-01 20:41:03.125648594 +0000 UTC m=+0.155554413 container attach dd004ebd6bf0d6538186ab7b7b9781ff2586affd4ad01713f3fe17bcad96d39d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:41:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:41:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:41:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:41:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:41:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:41:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:41:03 compute-0 dreamy_mirzakhani[149081]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:41:03 compute-0 dreamy_mirzakhani[149081]: --> All data devices are unavailable
Dec 01 20:41:03 compute-0 systemd[1]: libpod-dd004ebd6bf0d6538186ab7b7b9781ff2586affd4ad01713f3fe17bcad96d39d.scope: Deactivated successfully.
Dec 01 20:41:03 compute-0 podman[149064]: 2025-12-01 20:41:03.549493979 +0000 UTC m=+0.579399808 container died dd004ebd6bf0d6538186ab7b7b9781ff2586affd4ad01713f3fe17bcad96d39d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:41:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ed8b49dbe56d8801a4024e96657de9adf0ee63c88c3cc446e63503967412bc1-merged.mount: Deactivated successfully.
Dec 01 20:41:03 compute-0 podman[149064]: 2025-12-01 20:41:03.58691311 +0000 UTC m=+0.616818929 container remove dd004ebd6bf0d6538186ab7b7b9781ff2586affd4ad01713f3fe17bcad96d39d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:41:03 compute-0 systemd[1]: libpod-conmon-dd004ebd6bf0d6538186ab7b7b9781ff2586affd4ad01713f3fe17bcad96d39d.scope: Deactivated successfully.
Dec 01 20:41:03 compute-0 sudo[148985]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:03 compute-0 sudo[149113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:41:03 compute-0 sudo[149113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:41:03 compute-0 sudo[149113]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:03 compute-0 sudo[149138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:41:03 compute-0 sudo[149138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:41:03 compute-0 sudo[148957]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:03 compute-0 ceph-mon[75880]: pgmap v340: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:04 compute-0 podman[149241]: 2025-12-01 20:41:04.083663413 +0000 UTC m=+0.068919485 container create 12c6e10231ab11ca04d21fe624749331a3cf31fedc54148dcddd687f255676d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:41:04 compute-0 systemd[1]: Started libpod-conmon-12c6e10231ab11ca04d21fe624749331a3cf31fedc54148dcddd687f255676d2.scope.
Dec 01 20:41:04 compute-0 podman[149241]: 2025-12-01 20:41:04.03346265 +0000 UTC m=+0.018718742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:41:04 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:41:04 compute-0 podman[149241]: 2025-12-01 20:41:04.177392032 +0000 UTC m=+0.162648134 container init 12c6e10231ab11ca04d21fe624749331a3cf31fedc54148dcddd687f255676d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shockley, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:41:04 compute-0 podman[149241]: 2025-12-01 20:41:04.18490668 +0000 UTC m=+0.170162762 container start 12c6e10231ab11ca04d21fe624749331a3cf31fedc54148dcddd687f255676d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shockley, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec 01 20:41:04 compute-0 podman[149241]: 2025-12-01 20:41:04.188211612 +0000 UTC m=+0.173467684 container attach 12c6e10231ab11ca04d21fe624749331a3cf31fedc54148dcddd687f255676d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:41:04 compute-0 quizzical_shockley[149265]: 167 167
Dec 01 20:41:04 compute-0 systemd[1]: libpod-12c6e10231ab11ca04d21fe624749331a3cf31fedc54148dcddd687f255676d2.scope: Deactivated successfully.
Dec 01 20:41:04 compute-0 podman[149241]: 2025-12-01 20:41:04.190171505 +0000 UTC m=+0.175427577 container died 12c6e10231ab11ca04d21fe624749331a3cf31fedc54148dcddd687f255676d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:41:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f3c23ca8bf22a7f9b78762ebe3471bdab876d20ce07ca1f10aa4efdba8d6067-merged.mount: Deactivated successfully.
Dec 01 20:41:04 compute-0 podman[149241]: 2025-12-01 20:41:04.303072252 +0000 UTC m=+0.288328324 container remove 12c6e10231ab11ca04d21fe624749331a3cf31fedc54148dcddd687f255676d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 01 20:41:04 compute-0 systemd[1]: libpod-conmon-12c6e10231ab11ca04d21fe624749331a3cf31fedc54148dcddd687f255676d2.scope: Deactivated successfully.
Dec 01 20:41:04 compute-0 podman[149314]: 2025-12-01 20:41:04.467296254 +0000 UTC m=+0.052605863 container create 82c3a3e2f4f466621c985d5159ca8b677f351f5604cbc589b938ce9427ca87f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:41:04 compute-0 systemd[1]: Started libpod-conmon-82c3a3e2f4f466621c985d5159ca8b677f351f5604cbc589b938ce9427ca87f3.scope.
Dec 01 20:41:04 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:41:04 compute-0 sudo[149385]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luprsgkonpyhnvqzinexponbkdnqilgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621663.973778-130-232497685366831/AnsiballZ_systemd.py'
Dec 01 20:41:04 compute-0 podman[149314]: 2025-12-01 20:41:04.435723128 +0000 UTC m=+0.021032767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f676917d9aa4864122e37e7f9c4609afeb06c1c363ceac8d5e9dab1f4fd46619/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:04 compute-0 sudo[149385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f676917d9aa4864122e37e7f9c4609afeb06c1c363ceac8d5e9dab1f4fd46619/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f676917d9aa4864122e37e7f9c4609afeb06c1c363ceac8d5e9dab1f4fd46619/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f676917d9aa4864122e37e7f9c4609afeb06c1c363ceac8d5e9dab1f4fd46619/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:04 compute-0 podman[149314]: 2025-12-01 20:41:04.554283475 +0000 UTC m=+0.139593104 container init 82c3a3e2f4f466621c985d5159ca8b677f351f5604cbc589b938ce9427ca87f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_robinson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 01 20:41:04 compute-0 podman[149314]: 2025-12-01 20:41:04.56337284 +0000 UTC m=+0.148682449 container start 82c3a3e2f4f466621c985d5159ca8b677f351f5604cbc589b938ce9427ca87f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:41:04 compute-0 podman[149314]: 2025-12-01 20:41:04.567142429 +0000 UTC m=+0.152452058 container attach 82c3a3e2f4f466621c985d5159ca8b677f351f5604cbc589b938ce9427ca87f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_robinson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 01 20:41:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v341: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]: {
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:     "0": [
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:         {
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "devices": [
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "/dev/loop3"
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             ],
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_name": "ceph_lv0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_size": "21470642176",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "name": "ceph_lv0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "tags": {
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.cluster_name": "ceph",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.crush_device_class": "",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.encrypted": "0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.objectstore": "bluestore",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.osd_id": "0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.type": "block",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.vdo": "0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.with_tpm": "0"
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             },
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "type": "block",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "vg_name": "ceph_vg0"
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:         }
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:     ],
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:     "1": [
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:         {
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "devices": [
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "/dev/loop4"
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             ],
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_name": "ceph_lv1",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_size": "21470642176",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "name": "ceph_lv1",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "tags": {
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.cluster_name": "ceph",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.crush_device_class": "",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.encrypted": "0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.objectstore": "bluestore",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.osd_id": "1",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.type": "block",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.vdo": "0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.with_tpm": "0"
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             },
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "type": "block",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "vg_name": "ceph_vg1"
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:         }
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:     ],
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:     "2": [
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:         {
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "devices": [
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "/dev/loop5"
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             ],
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_name": "ceph_lv2",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_size": "21470642176",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "name": "ceph_lv2",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "tags": {
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.cluster_name": "ceph",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.crush_device_class": "",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.encrypted": "0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.objectstore": "bluestore",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.osd_id": "2",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.type": "block",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.vdo": "0",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:                 "ceph.with_tpm": "0"
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             },
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "type": "block",
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:             "vg_name": "ceph_vg2"
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:         }
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]:     ]
Dec 01 20:41:04 compute-0 wonderful_robinson[149379]: }
Dec 01 20:41:04 compute-0 python3.9[149387]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 01 20:41:04 compute-0 systemd[1]: libpod-82c3a3e2f4f466621c985d5159ca8b677f351f5604cbc589b938ce9427ca87f3.scope: Deactivated successfully.
Dec 01 20:41:04 compute-0 podman[149314]: 2025-12-01 20:41:04.871141391 +0000 UTC m=+0.456451020 container died 82c3a3e2f4f466621c985d5159ca8b677f351f5604cbc589b938ce9427ca87f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_robinson, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 01 20:41:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f676917d9aa4864122e37e7f9c4609afeb06c1c363ceac8d5e9dab1f4fd46619-merged.mount: Deactivated successfully.
Dec 01 20:41:04 compute-0 sudo[149385]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:04 compute-0 podman[149314]: 2025-12-01 20:41:04.959082116 +0000 UTC m=+0.544391725 container remove 82c3a3e2f4f466621c985d5159ca8b677f351f5604cbc589b938ce9427ca87f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_robinson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 01 20:41:04 compute-0 systemd[1]: libpod-conmon-82c3a3e2f4f466621c985d5159ca8b677f351f5604cbc589b938ce9427ca87f3.scope: Deactivated successfully.
Dec 01 20:41:04 compute-0 sudo[149138]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:05 compute-0 sudo[149433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:41:05 compute-0 sudo[149433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:41:05 compute-0 sudo[149433]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:05 compute-0 sudo[149479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:41:05 compute-0 sudo[149479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:41:05 compute-0 podman[149621]: 2025-12-01 20:41:05.416798752 +0000 UTC m=+0.094154277 container create 7e341d1d11a04a1dc49958cec83c842a87eee0183404a06d454ceb0612ae068d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:41:05 compute-0 podman[149621]: 2025-12-01 20:41:05.343268177 +0000 UTC m=+0.020623632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:41:05 compute-0 python3.9[149608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:05 compute-0 systemd[1]: Started libpod-conmon-7e341d1d11a04a1dc49958cec83c842a87eee0183404a06d454ceb0612ae068d.scope.
Dec 01 20:41:05 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:41:05 compute-0 podman[149621]: 2025-12-01 20:41:05.513941247 +0000 UTC m=+0.191296712 container init 7e341d1d11a04a1dc49958cec83c842a87eee0183404a06d454ceb0612ae068d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shaw, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:41:05 compute-0 podman[149621]: 2025-12-01 20:41:05.521570939 +0000 UTC m=+0.198926364 container start 7e341d1d11a04a1dc49958cec83c842a87eee0183404a06d454ceb0612ae068d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shaw, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:41:05 compute-0 podman[149621]: 2025-12-01 20:41:05.524298299 +0000 UTC m=+0.201653734 container attach 7e341d1d11a04a1dc49958cec83c842a87eee0183404a06d454ceb0612ae068d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shaw, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:41:05 compute-0 silly_shaw[149637]: 167 167
Dec 01 20:41:05 compute-0 systemd[1]: libpod-7e341d1d11a04a1dc49958cec83c842a87eee0183404a06d454ceb0612ae068d.scope: Deactivated successfully.
Dec 01 20:41:05 compute-0 podman[149621]: 2025-12-01 20:41:05.527375253 +0000 UTC m=+0.204730688 container died 7e341d1d11a04a1dc49958cec83c842a87eee0183404a06d454ceb0612ae068d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shaw, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:41:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d5e791dd5f1355c82595c65566a5749e7145491b76ed5e4d28eb23d9620875f-merged.mount: Deactivated successfully.
Dec 01 20:41:05 compute-0 podman[149621]: 2025-12-01 20:41:05.614749778 +0000 UTC m=+0.292105213 container remove 7e341d1d11a04a1dc49958cec83c842a87eee0183404a06d454ceb0612ae068d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_shaw, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:41:05 compute-0 systemd[1]: libpod-conmon-7e341d1d11a04a1dc49958cec83c842a87eee0183404a06d454ceb0612ae068d.scope: Deactivated successfully.
Dec 01 20:41:05 compute-0 podman[149778]: 2025-12-01 20:41:05.783972605 +0000 UTC m=+0.046060812 container create d8487eac6c9ac2f88d696c94ef215b9e5be6ae09e93e0cf000f1d27256a80402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 01 20:41:05 compute-0 systemd[1]: Started libpod-conmon-d8487eac6c9ac2f88d696c94ef215b9e5be6ae09e93e0cf000f1d27256a80402.scope.
Dec 01 20:41:05 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b963a2c2a484669551213f9b77848bcf37bc140f2222c65bf1120de60f928c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b963a2c2a484669551213f9b77848bcf37bc140f2222c65bf1120de60f928c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b963a2c2a484669551213f9b77848bcf37bc140f2222c65bf1120de60f928c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b963a2c2a484669551213f9b77848bcf37bc140f2222c65bf1120de60f928c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:05 compute-0 podman[149778]: 2025-12-01 20:41:05.760692595 +0000 UTC m=+0.022780822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:41:05 compute-0 podman[149778]: 2025-12-01 20:41:05.871118921 +0000 UTC m=+0.133207148 container init d8487eac6c9ac2f88d696c94ef215b9e5be6ae09e93e0cf000f1d27256a80402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 01 20:41:05 compute-0 podman[149778]: 2025-12-01 20:41:05.884330569 +0000 UTC m=+0.146418776 container start d8487eac6c9ac2f88d696c94ef215b9e5be6ae09e93e0cf000f1d27256a80402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:41:05 compute-0 podman[149778]: 2025-12-01 20:41:05.888269924 +0000 UTC m=+0.150358131 container attach d8487eac6c9ac2f88d696c94ef215b9e5be6ae09e93e0cf000f1d27256a80402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:41:05 compute-0 python3.9[149782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764621665.0711522-138-186472466636699/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:41:06 compute-0 ceph-mon[75880]: pgmap v341: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:06 compute-0 python3.9[149979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:06 compute-0 lvm[150051]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:41:06 compute-0 lvm[150051]: VG ceph_vg0 finished
Dec 01 20:41:06 compute-0 lvm[150053]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:41:06 compute-0 lvm[150053]: VG ceph_vg1 finished
Dec 01 20:41:06 compute-0 lvm[150071]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:41:06 compute-0 lvm[150071]: VG ceph_vg2 finished
Dec 01 20:41:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:06 compute-0 sharp_taussig[149797]: {}
Dec 01 20:41:06 compute-0 systemd[1]: libpod-d8487eac6c9ac2f88d696c94ef215b9e5be6ae09e93e0cf000f1d27256a80402.scope: Deactivated successfully.
Dec 01 20:41:06 compute-0 systemd[1]: libpod-d8487eac6c9ac2f88d696c94ef215b9e5be6ae09e93e0cf000f1d27256a80402.scope: Consumed 1.272s CPU time.
Dec 01 20:41:06 compute-0 podman[149778]: 2025-12-01 20:41:06.682022414 +0000 UTC m=+0.944110641 container died d8487eac6c9ac2f88d696c94ef215b9e5be6ae09e93e0cf000f1d27256a80402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 01 20:41:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b963a2c2a484669551213f9b77848bcf37bc140f2222c65bf1120de60f928c3-merged.mount: Deactivated successfully.
Dec 01 20:41:06 compute-0 podman[149778]: 2025-12-01 20:41:06.730123669 +0000 UTC m=+0.992211876 container remove d8487eac6c9ac2f88d696c94ef215b9e5be6ae09e93e0cf000f1d27256a80402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:41:06 compute-0 systemd[1]: libpod-conmon-d8487eac6c9ac2f88d696c94ef215b9e5be6ae09e93e0cf000f1d27256a80402.scope: Deactivated successfully.
Dec 01 20:41:06 compute-0 sudo[149479]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:41:06 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:41:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:41:06 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:41:06 compute-0 python3.9[150163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764621666.0597305-138-257506233063442/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:41:06 compute-0 sudo[150164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:41:06 compute-0 sudo[150164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:41:06 compute-0 sudo[150164]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:07 compute-0 ceph-mon[75880]: pgmap v342: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:07 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:41:07 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:41:08 compute-0 python3.9[150338]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:08 compute-0 python3.9[150459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764621667.5947883-182-28223843070695/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:41:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:09 compute-0 python3.9[150609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:09 compute-0 python3.9[150730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764621668.673836-182-255461011896042/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:41:09 compute-0 ceph-mon[75880]: pgmap v343: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:10 compute-0 python3.9[150880]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:41:10 compute-0 sudo[151032]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkbqpsopxqnwgqrcuzhghlhavbvmgcey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621670.3661127-220-46622919825869/AnsiballZ_file.py'
Dec 01 20:41:10 compute-0 sudo[151032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:10 compute-0 python3.9[151034]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:41:10 compute-0 sudo[151032]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:11 compute-0 sudo[151184]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjgbipmcyhnrychpohdpwrzxwogxfbew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621670.9357247-228-6746160149992/AnsiballZ_stat.py'
Dec 01 20:41:11 compute-0 sudo[151184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:11 compute-0 python3.9[151186]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:11 compute-0 sudo[151184]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:11 compute-0 sudo[151273]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsyczajdbhspgmldyznbugflhzspqdig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621670.9357247-228-6746160149992/AnsiballZ_file.py'
Dec 01 20:41:11 compute-0 sudo[151273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:11 compute-0 ovn_controller[146279]: 2025-12-01T20:41:11Z|00025|memory|INFO|16128 kB peak resident set size after 30.0 seconds
Dec 01 20:41:11 compute-0 ovn_controller[146279]: 2025-12-01T20:41:11Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec 01 20:41:11 compute-0 podman[151236]: 2025-12-01 20:41:11.669060325 +0000 UTC m=+0.108754265 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 20:41:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:11 compute-0 python3.9[151281]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:41:11 compute-0 sudo[151273]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:11 compute-0 ceph-mon[75880]: pgmap v344: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:12 compute-0 sudo[151441]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nykjpmszllygiawblghnjalydxwwpvxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621671.9172952-228-94850350597738/AnsiballZ_stat.py'
Dec 01 20:41:12 compute-0 sudo[151441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:12 compute-0 python3.9[151443]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:12 compute-0 sudo[151441]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:12 compute-0 sudo[151519]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdksitjojodtanfbjzndkuaoptnigxhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621671.9172952-228-94850350597738/AnsiballZ_file.py'
Dec 01 20:41:12 compute-0 sudo[151519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:12 compute-0 python3.9[151521]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:41:12 compute-0 sudo[151519]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:13 compute-0 sudo[151671]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shqngzxngldzpbgdeadfjvrcxmursdzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621672.9260821-251-106130844686840/AnsiballZ_file.py'
Dec 01 20:41:13 compute-0 sudo[151671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:13 compute-0 python3.9[151673]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:41:13 compute-0 sudo[151671]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:13 compute-0 sudo[151823]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwwfblgmbrtsyzlxgvsavxjzoozfyxlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621673.479467-259-43710303060206/AnsiballZ_stat.py'
Dec 01 20:41:13 compute-0 sudo[151823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:13 compute-0 python3.9[151825]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:13 compute-0 sudo[151823]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:14 compute-0 ceph-mon[75880]: pgmap v345: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:14 compute-0 sudo[151901]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-addtmvscptcyamjzerkeqlgbzyrwljsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621673.479467-259-43710303060206/AnsiballZ_file.py'
Dec 01 20:41:14 compute-0 sudo[151901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:14 compute-0 python3.9[151903]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:41:14 compute-0 sudo[151901]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:14 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:41:14 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1817 writes, 7830 keys, 1817 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                           Cumulative WAL: 1817 writes, 1817 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1817 writes, 7830 keys, 1817 commit groups, 1.0 writes per commit group, ingest: 8.48 MB, 0.01 MB/s
                                           Interval WAL: 1817 writes, 1817 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     67.0      0.09              0.01         3    0.030       0      0       0.0       0.0
                                             L6      1/0    4.28 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     60.7     51.8      0.19              0.03         2    0.094    6062    772       0.0       0.0
                                            Sum      1/0    4.28 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     41.2     56.6      0.28              0.04         5    0.056    6062    772       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     41.8     57.2      0.27              0.04         4    0.068    6062    772       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     60.7     51.8      0.19              0.03         2    0.094    6062    772       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     69.3      0.09              0.01         2    0.043       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.006, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.3 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a3cf2218d0#2 capacity: 308.00 MB usage: 602.38 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(39,532.34 KB,0.168788%) FilterBlock(6,24.23 KB,0.00768389%) IndexBlock(6,45.80 KB,0.0145206%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 01 20:41:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v346: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:14 compute-0 sudo[152053]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuowsbehmowsqoliptavbslbjeyrohoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621674.4612992-271-98821908037218/AnsiballZ_stat.py'
Dec 01 20:41:14 compute-0 sudo[152053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:14 compute-0 python3.9[152055]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:14 compute-0 sudo[152053]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:15 compute-0 sudo[152131]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmkleyxbuquujaqqkwhofvlrzaatwuof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621674.4612992-271-98821908037218/AnsiballZ_file.py'
Dec 01 20:41:15 compute-0 sudo[152131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:15 compute-0 python3.9[152133]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:41:15 compute-0 sudo[152131]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:15 compute-0 sudo[152283]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qokyqvxdxnhxjrxnixfxsairoapgcfgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621675.475754-283-203076675020704/AnsiballZ_systemd.py'
Dec 01 20:41:15 compute-0 sudo[152283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:16 compute-0 python3.9[152285]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:41:16 compute-0 systemd[1]: Reloading.
Dec 01 20:41:16 compute-0 ceph-mon[75880]: pgmap v346: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:16 compute-0 systemd-rc-local-generator[152312]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:41:16 compute-0 systemd-sysv-generator[152315]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:41:16 compute-0 sudo[152283]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:16 compute-0 sudo[152473]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-antnhxsmehctbdmaxeqgkvdvznvkmeux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621676.514643-291-128753767700182/AnsiballZ_stat.py'
Dec 01 20:41:16 compute-0 sudo[152473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:16 compute-0 python3.9[152475]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:16 compute-0 sudo[152473]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:17 compute-0 sudo[152551]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcimvzctmcrzxhbhnxqppjqwanuuwcen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621676.514643-291-128753767700182/AnsiballZ_file.py'
Dec 01 20:41:17 compute-0 sudo[152551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:17 compute-0 python3.9[152553]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:41:17 compute-0 sudo[152551]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:17 compute-0 sudo[152703]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezgbrbzdendynoruneptqomfmhhrumww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621677.4873703-303-21017872502079/AnsiballZ_stat.py'
Dec 01 20:41:17 compute-0 sudo[152703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:17 compute-0 python3.9[152705]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:17 compute-0 sudo[152703]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:18 compute-0 ceph-mon[75880]: pgmap v347: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:18 compute-0 sudo[152781]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlrspjqixeztammludhkvvuffqjmhdyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621677.4873703-303-21017872502079/AnsiballZ_file.py'
Dec 01 20:41:18 compute-0 sudo[152781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:18 compute-0 python3.9[152783]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:41:18 compute-0 sudo[152781]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:18 compute-0 sudo[152933]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daewovnpyulvpapnwythkilgjiiyfsee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621678.4604547-315-194555374979837/AnsiballZ_systemd.py'
Dec 01 20:41:18 compute-0 sudo[152933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:18 compute-0 python3.9[152935]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:41:19 compute-0 systemd[1]: Reloading.
Dec 01 20:41:19 compute-0 systemd-sysv-generator[152967]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:41:19 compute-0 systemd-rc-local-generator[152963]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:41:19 compute-0 systemd[1]: Starting Create netns directory...
Dec 01 20:41:19 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 01 20:41:19 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 01 20:41:19 compute-0 systemd[1]: Finished Create netns directory.
Dec 01 20:41:19 compute-0 sudo[152933]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:19 compute-0 sudo[153126]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acvtamqqzcsjhljvjeibzojlgguwnbdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621679.561697-325-168311038551300/AnsiballZ_file.py'
Dec 01 20:41:19 compute-0 sudo[153126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:19 compute-0 python3.9[153128]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:41:20 compute-0 sudo[153126]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:20 compute-0 ceph-mon[75880]: pgmap v348: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:20 compute-0 sudo[153278]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msvnyemlsgnidjapuaiirmpeteazrzgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621680.179635-333-177223135695557/AnsiballZ_stat.py'
Dec 01 20:41:20 compute-0 sudo[153278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v349: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:20 compute-0 python3.9[153280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:20 compute-0 sudo[153278]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:20 compute-0 sudo[153401]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwiixlhtidxsoqbyqnxxioozwqxznqrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621680.179635-333-177223135695557/AnsiballZ_copy.py'
Dec 01 20:41:20 compute-0 sudo[153401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:21 compute-0 python3.9[153403]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764621680.179635-333-177223135695557/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:41:21 compute-0 sudo[153401]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:21 compute-0 ceph-mon[75880]: pgmap v349: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:21 compute-0 sudo[153553]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lybelvlsuosjldckjyorhyijtywhbpef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621681.4274986-350-183343171421606/AnsiballZ_file.py'
Dec 01 20:41:21 compute-0 sudo[153553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:21 compute-0 python3.9[153555]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:41:21 compute-0 sudo[153553]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:22 compute-0 sudo[153705]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmmbolgidgbtchargxwlydzftsrvepoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621682.0531693-358-127266764536791/AnsiballZ_stat.py'
Dec 01 20:41:22 compute-0 sudo[153705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:22 compute-0 python3.9[153707]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:41:22 compute-0 sudo[153705]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:22 compute-0 sudo[153828]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifxqerurwywmmypnfinssphnkswzhfst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621682.0531693-358-127266764536791/AnsiballZ_copy.py'
Dec 01 20:41:22 compute-0 sudo[153828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:22 compute-0 python3.9[153830]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621682.0531693-358-127266764536791/.source.json _original_basename=.n747b6jm follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:41:22 compute-0 sudo[153828]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:23 compute-0 sudo[153980]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsgrsicuqanaxjpodmetqgzvfujbpruh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621683.095827-373-113715137445042/AnsiballZ_file.py'
Dec 01 20:41:23 compute-0 sudo[153980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:23 compute-0 python3.9[153982]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:41:23 compute-0 sudo[153980]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:23 compute-0 ceph-mon[75880]: pgmap v350: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:23 compute-0 sudo[154132]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmcsnzmekkexhqbxjmmyzferwgklkwhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621683.7041419-381-113403178018718/AnsiballZ_stat.py'
Dec 01 20:41:23 compute-0 sudo[154132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:24 compute-0 sudo[154132]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:24 compute-0 sudo[154255]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqndaxlqytlkhbcnsbiwcnvsomzttffq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621683.7041419-381-113403178018718/AnsiballZ_copy.py'
Dec 01 20:41:24 compute-0 sudo[154255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:24 compute-0 sudo[154255]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:25 compute-0 sudo[154407]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzjkiabbkfwhpwnwkjefblpchkrzuoax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621684.8820004-398-274834648337179/AnsiballZ_container_config_data.py'
Dec 01 20:41:25 compute-0 sudo[154407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:25 compute-0 python3.9[154409]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 01 20:41:25 compute-0 sudo[154407]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:25 compute-0 ceph-mon[75880]: pgmap v351: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:26 compute-0 sudo[154559]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqcoghkjenkwibzscbdvjvyynskqrsuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621685.6302788-407-69120269386699/AnsiballZ_container_config_hash.py'
Dec 01 20:41:26 compute-0 sudo[154559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:26 compute-0 python3.9[154561]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 01 20:41:26 compute-0 sudo[154559]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:26 compute-0 sudo[154711]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvhydzzvxazrdtzetonwnjzvarmcqsaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621686.44104-416-146441525085985/AnsiballZ_podman_container_info.py'
Dec 01 20:41:26 compute-0 sudo[154711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:27 compute-0 python3.9[154713]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 01 20:41:27 compute-0 sudo[154711]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:27 compute-0 ceph-mon[75880]: pgmap v352: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:28 compute-0 sudo[154889]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dggtahwkpdvbfbfbbyimxtxtytkexlmt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764621687.7108831-429-45805502601296/AnsiballZ_edpm_container_manage.py'
Dec 01 20:41:28 compute-0 sudo[154889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:28 compute-0 python3[154891]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 01 20:41:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:29 compute-0 ceph-mon[75880]: pgmap v353: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:31 compute-0 ceph-mon[75880]: pgmap v354: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:41:32
Dec 01 20:41:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:41:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:41:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images']
Dec 01 20:41:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:41:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v355: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:41:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:41:33 compute-0 ceph-mon[75880]: pgmap v355: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:35 compute-0 ceph-mon[75880]: pgmap v356: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:37 compute-0 podman[154902]: 2025-12-01 20:41:37.431437383 +0000 UTC m=+8.989733591 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 01 20:41:37 compute-0 podman[155023]: 2025-12-01 20:41:37.568803943 +0000 UTC m=+0.051211981 container create 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 01 20:41:37 compute-0 podman[155023]: 2025-12-01 20:41:37.53890546 +0000 UTC m=+0.021313518 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 01 20:41:37 compute-0 python3[154891]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 01 20:41:37 compute-0 sudo[154889]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:37 compute-0 ceph-mon[75880]: pgmap v357: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:38 compute-0 sudo[155211]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnqsiqluhwyowaecxmvkjrzttccwrnki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621697.8386574-437-120228498054939/AnsiballZ_stat.py'
Dec 01 20:41:38 compute-0 sudo[155211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:38 compute-0 python3.9[155213]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:41:38 compute-0 sudo[155211]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:38 compute-0 sudo[155365]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cewsjxtxksfzpaymtzvciagmcpmnvbys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621698.4786909-446-203592375675156/AnsiballZ_file.py'
Dec 01 20:41:38 compute-0 sudo[155365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:38 compute-0 python3.9[155367]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:41:38 compute-0 sudo[155365]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:39 compute-0 sudo[155441]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpzimeipwjahyhajddffnszhofjwkrqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621698.4786909-446-203592375675156/AnsiballZ_stat.py'
Dec 01 20:41:39 compute-0 sudo[155441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:39 compute-0 python3.9[155443]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:41:39 compute-0 sudo[155441]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:39 compute-0 sudo[155592]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zawvfnsxiqqjfesokyvvpdqaxwhlbvcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621699.330871-446-203915267342184/AnsiballZ_copy.py'
Dec 01 20:41:39 compute-0 sudo[155592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:39 compute-0 ceph-mon[75880]: pgmap v358: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:39 compute-0 python3.9[155594]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764621699.330871-446-203915267342184/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:41:39 compute-0 sudo[155592]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:40 compute-0 sudo[155668]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfbkaezguscopawwjhxanriiaglzbrwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621699.330871-446-203915267342184/AnsiballZ_systemd.py'
Dec 01 20:41:40 compute-0 sudo[155668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:41:40 compute-0 python3.9[155670]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 01 20:41:40 compute-0 systemd[1]: Reloading.
Dec 01 20:41:40 compute-0 systemd-rc-local-generator[155695]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:41:40 compute-0 systemd-sysv-generator[155699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:41:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:40 compute-0 sudo[155668]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:41 compute-0 sudo[155779]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzccddafljuotgevkkrxkmgflccdcvlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621699.330871-446-203915267342184/AnsiballZ_systemd.py'
Dec 01 20:41:41 compute-0 sudo[155779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:41 compute-0 python3.9[155781]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:41:41 compute-0 systemd[1]: Reloading.
Dec 01 20:41:41 compute-0 systemd-rc-local-generator[155812]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:41:41 compute-0 systemd-sysv-generator[155815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:41:41 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 01 20:41:41 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986271b3112aff3f7b76c2f4cea4d016a9376c7d65f12a26c8ece7d10f5f65a4/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986271b3112aff3f7b76c2f4cea4d016a9376c7d65f12a26c8ece7d10f5f65a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 01 20:41:41 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66.
Dec 01 20:41:41 compute-0 podman[155823]: 2025-12-01 20:41:41.767920271 +0000 UTC m=+0.132559464 container init 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: + sudo -E kolla_set_configs
Dec 01 20:41:41 compute-0 podman[155823]: 2025-12-01 20:41:41.799012048 +0000 UTC m=+0.163651211 container start 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 01 20:41:41 compute-0 edpm-start-podman-container[155823]: ovn_metadata_agent
Dec 01 20:41:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:41 compute-0 ceph-mon[75880]: pgmap v359: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Validating config file
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Copying service configuration files
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Writing out command to execute
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 01 20:41:41 compute-0 podman[155842]: 2025-12-01 20:41:41.876408526 +0000 UTC m=+0.133066633 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: ++ cat /run_command
Dec 01 20:41:41 compute-0 edpm-start-podman-container[155822]: Creating additional drop-in dependency for "ovn_metadata_agent" (1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66)
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: + CMD=neutron-ovn-metadata-agent
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: + ARGS=
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: + sudo kolla_copy_cacerts
Dec 01 20:41:41 compute-0 systemd[1]: Reloading.
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: + [[ ! -n '' ]]
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: + . kolla_extend_start
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: Running command: 'neutron-ovn-metadata-agent'
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: + umask 0022
Dec 01 20:41:41 compute-0 ovn_metadata_agent[155839]: + exec neutron-ovn-metadata-agent
Dec 01 20:41:41 compute-0 podman[155860]: 2025-12-01 20:41:41.90497409 +0000 UTC m=+0.095610541 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 01 20:41:41 compute-0 systemd-rc-local-generator[155941]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:41:41 compute-0 systemd-sysv-generator[155944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:41:42 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec 01 20:41:42 compute-0 sudo[155779]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:42 compute-0 sshd-session[146882]: Connection closed by 192.168.122.30 port 34080
Dec 01 20:41:42 compute-0 sshd-session[146879]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:41:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:42 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec 01 20:41:42 compute-0 systemd[1]: session-49.scope: Consumed 51.510s CPU time.
Dec 01 20:41:42 compute-0 systemd-logind[796]: Session 49 logged out. Waiting for processes to exit.
Dec 01 20:41:42 compute-0 systemd-logind[796]: Removed session 49.
Dec 01 20:41:43 compute-0 ceph-mon[75880]: pgmap v360: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.302 155855 INFO neutron.common.config [-] Logging enabled!
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.302 155855 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.302 155855 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.303 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.303 155855 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.303 155855 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.303 155855 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.303 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.304 155855 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.304 155855 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.304 155855 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.304 155855 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.304 155855 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.304 155855 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.304 155855 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.304 155855 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.304 155855 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.305 155855 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.305 155855 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.305 155855 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.305 155855 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.305 155855 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.305 155855 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.305 155855 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.305 155855 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.306 155855 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.306 155855 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.306 155855 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.306 155855 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.306 155855 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.306 155855 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.306 155855 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.306 155855 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.307 155855 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.307 155855 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.307 155855 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.307 155855 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.307 155855 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.307 155855 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.307 155855 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.308 155855 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.308 155855 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.308 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.308 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.308 155855 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.308 155855 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.308 155855 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.308 155855 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.308 155855 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.309 155855 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.309 155855 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.309 155855 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.309 155855 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.309 155855 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.309 155855 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.309 155855 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.309 155855 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.309 155855 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.310 155855 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.310 155855 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.310 155855 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.310 155855 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.310 155855 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.310 155855 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.310 155855 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.310 155855 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.311 155855 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.311 155855 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.311 155855 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.311 155855 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.311 155855 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.311 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.311 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.311 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.311 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.312 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.312 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.312 155855 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.312 155855 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.312 155855 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.312 155855 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.312 155855 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.312 155855 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.312 155855 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.312 155855 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.313 155855 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.313 155855 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.313 155855 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.313 155855 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.313 155855 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.313 155855 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.313 155855 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.313 155855 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.313 155855 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.314 155855 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.314 155855 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.314 155855 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.314 155855 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.314 155855 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.314 155855 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.314 155855 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.314 155855 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.314 155855 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.314 155855 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.315 155855 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.315 155855 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.315 155855 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.315 155855 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.315 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.315 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.315 155855 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.315 155855 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.316 155855 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.316 155855 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.316 155855 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.316 155855 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.316 155855 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.316 155855 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.316 155855 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.316 155855 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.316 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.317 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.317 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.317 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.317 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.317 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.317 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.317 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.317 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.317 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.318 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.318 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.318 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.318 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.318 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.318 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.318 155855 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.318 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.319 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.319 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.319 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.319 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.319 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.319 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.319 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.319 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.320 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.320 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.320 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.320 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.320 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.320 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.320 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.321 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.321 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.321 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.321 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.321 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.321 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.321 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.321 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.322 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.322 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.322 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.322 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.322 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.322 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.322 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.322 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.322 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.323 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.323 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.323 155855 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.323 155855 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.323 155855 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.323 155855 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.323 155855 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.323 155855 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.324 155855 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.324 155855 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.324 155855 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.324 155855 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.324 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.324 155855 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.324 155855 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.324 155855 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.324 155855 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.325 155855 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.325 155855 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.325 155855 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.325 155855 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.325 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.325 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.325 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.325 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.325 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.326 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.326 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.326 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.326 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.326 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.326 155855 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.326 155855 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.326 155855 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.326 155855 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.327 155855 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.327 155855 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.327 155855 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.327 155855 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.327 155855 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.327 155855 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.327 155855 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.327 155855 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.327 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.327 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.328 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.328 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.328 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.328 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.328 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.328 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.328 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.328 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.329 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.329 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.329 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.329 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.329 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.329 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.329 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.329 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.329 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.329 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.330 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.330 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.330 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.330 155855 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.330 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.330 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.330 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.330 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.331 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.331 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.331 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.331 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.331 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.331 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.331 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.331 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.331 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.332 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.332 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.332 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.332 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.332 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.332 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.332 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.332 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.332 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.333 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.333 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.333 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.333 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.333 155855 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.333 155855 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.333 155855 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.333 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.334 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.334 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.334 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.334 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.334 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.334 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.334 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.334 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.335 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.335 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.335 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.335 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.335 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.335 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.335 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.335 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.336 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.336 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.336 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.336 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.336 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.336 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.336 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.336 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.337 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.337 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.337 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.337 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.337 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.337 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.337 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.337 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.337 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.338 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.338 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.338 155855 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.338 155855 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.347 155855 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.348 155855 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.348 155855 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.348 155855 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.348 155855 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.362 155855 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 84a1d907-d341-4608-b17a-1f738619ea16 (UUID: 84a1d907-d341-4608-b17a-1f738619ea16) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.391 155855 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.391 155855 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.391 155855 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.392 155855 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.394 155855 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.400 155855 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.405 155855 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '84a1d907-d341-4608-b17a-1f738619ea16'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd861e62af0>], external_ids={}, name=84a1d907-d341-4608-b17a-1f738619ea16, nb_cfg_timestamp=1764621649667, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.406 155855 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd861e65b20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.406 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.407 155855 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.407 155855 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.407 155855 INFO oslo_service.service [-] Starting 1 workers
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.411 155855 DEBUG oslo_service.service [-] Started child 155977 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.414 155855 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp_1ckkar8/privsep.sock']
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.414 155977 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-4112026'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.436 155977 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.436 155977 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.436 155977 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.439 155977 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.445 155977 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 01 20:41:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.450 155977 INFO eventlet.wsgi.server [-] (155977) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 01 20:41:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:44 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 01 20:41:45 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:45.112 155855 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 01 20:41:45 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:45.113 155855 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_1ckkar8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 01 20:41:45 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.986 155982 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 01 20:41:45 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.990 155982 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 01 20:41:45 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.992 155982 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 01 20:41:45 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:44.993 155982 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155982
Dec 01 20:41:45 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:45.115 155982 DEBUG oslo.privsep.daemon [-] privsep: reply[4abacd59-9fef-497c-8139-859909e67e8a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 01 20:41:45 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:45.620 155982 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:41:45 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:45.620 155982 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:41:45 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:45.620 155982 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:41:45 compute-0 ceph-mon[75880]: pgmap v361: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.160 155982 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f133c3-1dd6-4c6e-a982-6303068f0c67]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.162 155855 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=84a1d907-d341-4608-b17a-1f738619ea16, column=external_ids, values=({'neutron:ovn-metadata-id': '9ad778da-5fbd-5ba1-9892-979afc015b79'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.172 155855 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=84a1d907-d341-4608-b17a-1f738619ea16, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.177 155855 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.177 155855 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.177 155855 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.177 155855 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.177 155855 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.178 155855 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.178 155855 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.178 155855 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.178 155855 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.178 155855 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.178 155855 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.178 155855 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.179 155855 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.179 155855 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.179 155855 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.179 155855 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.179 155855 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.179 155855 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.179 155855 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.179 155855 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.180 155855 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.180 155855 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.180 155855 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.180 155855 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.180 155855 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.180 155855 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.180 155855 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.181 155855 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.181 155855 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.181 155855 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.181 155855 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.181 155855 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.181 155855 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.181 155855 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.181 155855 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.182 155855 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.182 155855 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.182 155855 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.182 155855 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.182 155855 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.182 155855 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.182 155855 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.182 155855 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.183 155855 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.183 155855 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.183 155855 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.183 155855 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.183 155855 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.183 155855 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.183 155855 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.183 155855 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.184 155855 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.184 155855 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.184 155855 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.184 155855 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.184 155855 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.184 155855 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.184 155855 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.185 155855 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.185 155855 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.185 155855 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.185 155855 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.185 155855 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.185 155855 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.185 155855 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.186 155855 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.186 155855 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.186 155855 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.186 155855 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.186 155855 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.186 155855 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.186 155855 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.186 155855 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.186 155855 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.186 155855 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.187 155855 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.187 155855 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.187 155855 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.187 155855 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.187 155855 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.187 155855 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.187 155855 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.187 155855 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.187 155855 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.188 155855 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.188 155855 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.188 155855 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.188 155855 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.188 155855 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.188 155855 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.188 155855 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.189 155855 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.189 155855 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.189 155855 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.189 155855 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.189 155855 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.189 155855 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.189 155855 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.189 155855 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.190 155855 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.190 155855 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.190 155855 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.190 155855 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.190 155855 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.190 155855 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.190 155855 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.190 155855 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.190 155855 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.191 155855 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.191 155855 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.191 155855 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.191 155855 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.191 155855 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.191 155855 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.191 155855 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.191 155855 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.192 155855 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.192 155855 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.192 155855 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.192 155855 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.192 155855 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.192 155855 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.192 155855 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.193 155855 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.193 155855 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.193 155855 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.193 155855 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.193 155855 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.193 155855 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.193 155855 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.194 155855 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.194 155855 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.194 155855 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.194 155855 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.194 155855 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.194 155855 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.194 155855 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.194 155855 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.194 155855 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.195 155855 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.195 155855 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.195 155855 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.195 155855 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.195 155855 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.195 155855 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.195 155855 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.195 155855 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.195 155855 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.196 155855 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.196 155855 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.196 155855 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.196 155855 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.196 155855 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.196 155855 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.196 155855 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.196 155855 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.197 155855 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.197 155855 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.197 155855 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.197 155855 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.197 155855 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.197 155855 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.197 155855 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.197 155855 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.197 155855 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.197 155855 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.197 155855 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.198 155855 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.198 155855 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.198 155855 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.198 155855 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.198 155855 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.198 155855 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.198 155855 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.198 155855 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.198 155855 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.199 155855 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.199 155855 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.199 155855 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.199 155855 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.199 155855 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.199 155855 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.199 155855 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.199 155855 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.199 155855 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.199 155855 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.200 155855 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.200 155855 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.200 155855 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.200 155855 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.200 155855 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.200 155855 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.200 155855 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.200 155855 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.200 155855 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.201 155855 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.201 155855 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.201 155855 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.201 155855 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.201 155855 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.201 155855 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.201 155855 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.201 155855 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.201 155855 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.201 155855 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.202 155855 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.202 155855 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.202 155855 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.202 155855 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.202 155855 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.202 155855 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.202 155855 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.202 155855 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.202 155855 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.203 155855 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.203 155855 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.203 155855 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.203 155855 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.203 155855 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.203 155855 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.203 155855 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.203 155855 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.203 155855 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.203 155855 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.204 155855 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.204 155855 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.204 155855 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.204 155855 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.204 155855 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.204 155855 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.204 155855 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.204 155855 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.204 155855 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.204 155855 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.205 155855 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.205 155855 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.205 155855 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.205 155855 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.205 155855 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.205 155855 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.205 155855 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.205 155855 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.205 155855 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.206 155855 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.206 155855 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.206 155855 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.206 155855 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.206 155855 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.206 155855 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.206 155855 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.206 155855 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.206 155855 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.207 155855 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.207 155855 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.207 155855 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.207 155855 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.207 155855 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.207 155855 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.207 155855 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.207 155855 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.207 155855 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.207 155855 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.208 155855 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.208 155855 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.208 155855 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.208 155855 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.208 155855 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.208 155855 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.208 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.208 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.208 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.209 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.209 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.209 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.209 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.209 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.209 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.209 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.209 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.209 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.210 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.210 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.210 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.210 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.210 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.210 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.210 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.210 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.210 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.210 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.211 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.211 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.211 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.211 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.211 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.211 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.211 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.211 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.211 155855 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.212 155855 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.212 155855 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.212 155855 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.212 155855 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:41:46 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:41:46.212 155855 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 01 20:41:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:47 compute-0 sshd-session[155987]: Accepted publickey for zuul from 192.168.122.30 port 50590 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:41:47 compute-0 systemd-logind[796]: New session 50 of user zuul.
Dec 01 20:41:47 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec 01 20:41:47 compute-0 sshd-session[155987]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:41:47 compute-0 ceph-mon[75880]: pgmap v362: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:48 compute-0 python3.9[156140]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:41:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:49 compute-0 sudo[156294]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrbgihcutiqwjsnayfeiqexbpenokzrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621709.049747-34-12361198285341/AnsiballZ_command.py'
Dec 01 20:41:49 compute-0 sudo[156294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:49 compute-0 python3.9[156296]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:41:49 compute-0 sudo[156294]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:50 compute-0 sudo[156460]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrtmegqnvuzsjkwhloqxnawkrkjoqhcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621709.966985-45-130131678280615/AnsiballZ_systemd_service.py'
Dec 01 20:41:50 compute-0 sudo[156460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:51 compute-0 python3.9[156462]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 01 20:41:51 compute-0 systemd[1]: Reloading.
Dec 01 20:41:52 compute-0 systemd-rc-local-generator[156490]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:41:52 compute-0 systemd-sysv-generator[156494]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:41:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:52 compute-0 ceph-mon[75880]: pgmap v363: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:52 compute-0 sudo[156460]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:53 compute-0 python3.9[156647]: ansible-ansible.builtin.service_facts Invoked
Dec 01 20:41:53 compute-0 network[156664]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 01 20:41:53 compute-0 network[156665]: 'network-scripts' will be removed from distribution in near future.
Dec 01 20:41:53 compute-0 network[156666]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 01 20:41:53 compute-0 ceph-mon[75880]: pgmap v364: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:54 compute-0 ceph-mon[75880]: pgmap v365: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:55 compute-0 ceph-mon[75880]: pgmap v366: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:56 compute-0 sudo[156927]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pksmppmgpdeiypomobtsvoydatiqdsva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621716.1998894-64-54068653020/AnsiballZ_systemd_service.py'
Dec 01 20:41:56 compute-0 sudo[156927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:56 compute-0 python3.9[156929]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:41:56 compute-0 sudo[156927]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:41:57 compute-0 sudo[157080]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xklgiifteljpvvxopaiedhbqoxpybpvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621717.0436504-64-18088133604311/AnsiballZ_systemd_service.py'
Dec 01 20:41:57 compute-0 sudo[157080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:57 compute-0 python3.9[157082]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:41:57 compute-0 sudo[157080]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:57 compute-0 ceph-mon[75880]: pgmap v367: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:58 compute-0 sudo[157233]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqhppdawlgnxcokhjmnuopjsupgsyzer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621717.80286-64-97747683723516/AnsiballZ_systemd_service.py'
Dec 01 20:41:58 compute-0 sudo[157233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:58 compute-0 python3.9[157235]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:41:58 compute-0 sudo[157233]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:58 compute-0 sudo[157386]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtjcecigmthyobqqymwnfigmfmwclgke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621718.528369-64-1582720897521/AnsiballZ_systemd_service.py'
Dec 01 20:41:58 compute-0 sudo[157386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:59 compute-0 python3.9[157388]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:41:59 compute-0 sudo[157386]: pam_unix(sudo:session): session closed for user root
Dec 01 20:41:59 compute-0 sudo[157539]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmoakdixbpkbjlstauyqyahobwvnrtva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621719.2699194-64-224748105489353/AnsiballZ_systemd_service.py'
Dec 01 20:41:59 compute-0 sudo[157539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:41:59 compute-0 ceph-mon[75880]: pgmap v368: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:41:59 compute-0 python3.9[157541]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:41:59 compute-0 sudo[157539]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:00 compute-0 sudo[157692]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eizchvmqtgmdephdopdscmclpjwjbdfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621720.132821-64-228325700366838/AnsiballZ_systemd_service.py'
Dec 01 20:42:00 compute-0 sudo[157692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:00 compute-0 python3.9[157694]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:42:00 compute-0 sudo[157692]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:01 compute-0 sudo[157845]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ombtewheyfbriffgfafsnyfdxgaucjll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621720.8872786-64-82719877927422/AnsiballZ_systemd_service.py'
Dec 01 20:42:01 compute-0 sudo[157845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:01 compute-0 python3.9[157847]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:42:01 compute-0 sudo[157845]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:01 compute-0 ceph-mon[75880]: pgmap v369: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:02 compute-0 sudo[157998]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esudeyhmtwxanzjilnjzrteqpahrlaqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621721.8626025-116-71682168823107/AnsiballZ_file.py'
Dec 01 20:42:02 compute-0 sudo[157998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:02 compute-0 python3.9[158000]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:02 compute-0 sudo[157998]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:03 compute-0 sudo[158150]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcjlxewwmlyksjgtstzpcuihzbgszsyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621722.773651-116-203863488813871/AnsiballZ_file.py'
Dec 01 20:42:03 compute-0 sudo[158150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:03 compute-0 python3.9[158152]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:03 compute-0 sudo[158150]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:42:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:42:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:42:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:42:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:42:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:42:03 compute-0 sudo[158302]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzayyxxfbuvpcvgeqrdbevayjamkuebk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621723.3398664-116-21678955805667/AnsiballZ_file.py'
Dec 01 20:42:03 compute-0 sudo[158302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:03 compute-0 ceph-mon[75880]: pgmap v370: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:03 compute-0 python3.9[158304]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:03 compute-0 sudo[158302]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:04 compute-0 sudo[158454]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhbgblqkmfvwwxntfviwqtoavbetdcpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621724.010459-116-117425929645510/AnsiballZ_file.py'
Dec 01 20:42:04 compute-0 sudo[158454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:04 compute-0 python3.9[158456]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:04 compute-0 sudo[158454]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:04 compute-0 sudo[158606]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuttuwkkzwetabiivlxsrkmnwrnwlpgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621724.6377022-116-119952301193483/AnsiballZ_file.py'
Dec 01 20:42:04 compute-0 sudo[158606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:05 compute-0 python3.9[158608]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:05 compute-0 sudo[158606]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:05 compute-0 sudo[158758]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxpugavfnsyodjbvbpyamwhlekfqvqnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621725.2454991-116-138843866732921/AnsiballZ_file.py'
Dec 01 20:42:05 compute-0 sudo[158758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:05 compute-0 python3.9[158760]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:05 compute-0 sudo[158758]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:05 compute-0 ceph-mon[75880]: pgmap v371: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:06 compute-0 sudo[158910]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycyindserrdxcjalngwqrkskxbxdjwzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621725.8655891-116-129456986690227/AnsiballZ_file.py'
Dec 01 20:42:06 compute-0 sudo[158910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:06 compute-0 python3.9[158912]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:06 compute-0 sudo[158910]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:06 compute-0 sudo[159062]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btbaexakkjhbbzeaoupuhhglensgmxxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621726.5769715-166-148137218685355/AnsiballZ_file.py'
Dec 01 20:42:06 compute-0 sudo[159062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:07 compute-0 sudo[159065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:42:07 compute-0 sudo[159065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:42:07 compute-0 sudo[159065]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:07 compute-0 sudo[159090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 01 20:42:07 compute-0 sudo[159090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:42:07 compute-0 python3.9[159064]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:07 compute-0 sudo[159062]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:07 compute-0 sudo[159090]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:42:07 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:42:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:42:07 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:42:07 compute-0 sudo[159237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:42:07 compute-0 sudo[159237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:42:07 compute-0 sudo[159237]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:07 compute-0 sudo[159332]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccartnyzvwnldhurzkfjhavyxglufnpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621727.350449-166-273386231899163/AnsiballZ_file.py'
Dec 01 20:42:07 compute-0 sudo[159332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:07 compute-0 sudo[159283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:42:07 compute-0 sudo[159283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:42:07 compute-0 python3.9[159335]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:07 compute-0 sudo[159332]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:07 compute-0 ceph-mon[75880]: pgmap v372: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:07 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:42:07 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:42:08 compute-0 sudo[159283]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:42:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:42:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:42:08 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:42:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:42:08 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:42:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:42:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:42:08 compute-0 sudo[159517]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huotqxogghwmjtbcekhlxxsazgdnmwly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621727.9721346-166-122161282903242/AnsiballZ_file.py'
Dec 01 20:42:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:42:08 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:42:08 compute-0 sudo[159517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:42:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:42:08 compute-0 sudo[159520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:42:08 compute-0 sudo[159520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:42:08 compute-0 sudo[159520]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:08 compute-0 sudo[159545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:42:08 compute-0 sudo[159545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:42:08 compute-0 python3.9[159519]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:08 compute-0 sudo[159517]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:08 compute-0 podman[159607]: 2025-12-01 20:42:08.610707775 +0000 UTC m=+0.019340162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:42:08 compute-0 sudo[159745]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shmdazdhtgxzzaghxndtykzualfaehdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621728.6176746-166-11060161586037/AnsiballZ_file.py'
Dec 01 20:42:08 compute-0 sudo[159745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:09 compute-0 podman[159607]: 2025-12-01 20:42:09.656807743 +0000 UTC m=+1.065440110 container create c3cfd8298b3d3a7317076b3a8568f431c5921b1f4802823d5177b57fdb820bdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lumiere, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:42:09 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:42:09 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:42:09 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:42:09 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:42:09 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:42:09 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:42:09 compute-0 python3.9[159747]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:09 compute-0 sudo[159745]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:09 compute-0 systemd[1]: Started libpod-conmon-c3cfd8298b3d3a7317076b3a8568f431c5921b1f4802823d5177b57fdb820bdf.scope.
Dec 01 20:42:09 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:42:09 compute-0 podman[159607]: 2025-12-01 20:42:09.777769345 +0000 UTC m=+1.186401732 container init c3cfd8298b3d3a7317076b3a8568f431c5921b1f4802823d5177b57fdb820bdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:42:09 compute-0 podman[159607]: 2025-12-01 20:42:09.785960596 +0000 UTC m=+1.194592963 container start c3cfd8298b3d3a7317076b3a8568f431c5921b1f4802823d5177b57fdb820bdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:42:09 compute-0 podman[159607]: 2025-12-01 20:42:09.791680221 +0000 UTC m=+1.200312578 container attach c3cfd8298b3d3a7317076b3a8568f431c5921b1f4802823d5177b57fdb820bdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lumiere, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Dec 01 20:42:09 compute-0 cranky_lumiere[159751]: 167 167
Dec 01 20:42:09 compute-0 systemd[1]: libpod-c3cfd8298b3d3a7317076b3a8568f431c5921b1f4802823d5177b57fdb820bdf.scope: Deactivated successfully.
Dec 01 20:42:09 compute-0 podman[159607]: 2025-12-01 20:42:09.800760009 +0000 UTC m=+1.209392386 container died c3cfd8298b3d3a7317076b3a8568f431c5921b1f4802823d5177b57fdb820bdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lumiere, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 01 20:42:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-036c9be7078de9da27142012e25238c36e3e5a8f4d6e5dd991ee2c8024fded99-merged.mount: Deactivated successfully.
Dec 01 20:42:09 compute-0 podman[159607]: 2025-12-01 20:42:09.852414251 +0000 UTC m=+1.261046618 container remove c3cfd8298b3d3a7317076b3a8568f431c5921b1f4802823d5177b57fdb820bdf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lumiere, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 01 20:42:09 compute-0 systemd[1]: libpod-conmon-c3cfd8298b3d3a7317076b3a8568f431c5921b1f4802823d5177b57fdb820bdf.scope: Deactivated successfully.
Dec 01 20:42:10 compute-0 podman[159863]: 2025-12-01 20:42:10.001273506 +0000 UTC m=+0.044702969 container create d584068c18eece0418ad2ba80d19043ce63f5608c20f68b399ef24932dfd54ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 01 20:42:10 compute-0 systemd[1]: Started libpod-conmon-d584068c18eece0418ad2ba80d19043ce63f5608c20f68b399ef24932dfd54ee.scope.
Dec 01 20:42:10 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d6156462ff7a297eab0b354b11a2a605041de2620bfe2b4c302c1f3c780b6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:10 compute-0 podman[159863]: 2025-12-01 20:42:09.980568753 +0000 UTC m=+0.023998216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d6156462ff7a297eab0b354b11a2a605041de2620bfe2b4c302c1f3c780b6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d6156462ff7a297eab0b354b11a2a605041de2620bfe2b4c302c1f3c780b6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d6156462ff7a297eab0b354b11a2a605041de2620bfe2b4c302c1f3c780b6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d6156462ff7a297eab0b354b11a2a605041de2620bfe2b4c302c1f3c780b6e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:10 compute-0 podman[159863]: 2025-12-01 20:42:10.091390054 +0000 UTC m=+0.134819517 container init d584068c18eece0418ad2ba80d19043ce63f5608c20f68b399ef24932dfd54ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_turing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 01 20:42:10 compute-0 sudo[159943]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iholgbxtoijljeutrereclyxmphvdpha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621729.8524096-166-79750851727199/AnsiballZ_file.py'
Dec 01 20:42:10 compute-0 sudo[159943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:10 compute-0 podman[159863]: 2025-12-01 20:42:10.09910424 +0000 UTC m=+0.142533703 container start d584068c18eece0418ad2ba80d19043ce63f5608c20f68b399ef24932dfd54ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:42:10 compute-0 podman[159863]: 2025-12-01 20:42:10.1030147 +0000 UTC m=+0.146444193 container attach d584068c18eece0418ad2ba80d19043ce63f5608c20f68b399ef24932dfd54ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 01 20:42:10 compute-0 python3.9[159946]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:10 compute-0 sudo[159943]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:10 compute-0 laughing_turing[159914]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:42:10 compute-0 laughing_turing[159914]: --> All data devices are unavailable
Dec 01 20:42:10 compute-0 systemd[1]: libpod-d584068c18eece0418ad2ba80d19043ce63f5608c20f68b399ef24932dfd54ee.scope: Deactivated successfully.
Dec 01 20:42:10 compute-0 podman[159863]: 2025-12-01 20:42:10.578194144 +0000 UTC m=+0.621623647 container died d584068c18eece0418ad2ba80d19043ce63f5608c20f68b399ef24932dfd54ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Dec 01 20:42:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5d6156462ff7a297eab0b354b11a2a605041de2620bfe2b4c302c1f3c780b6e-merged.mount: Deactivated successfully.
Dec 01 20:42:10 compute-0 sudo[160125]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krzitgtftheknpsxljwwirvyupajjshn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621730.393695-166-97874772776748/AnsiballZ_file.py'
Dec 01 20:42:10 compute-0 podman[159863]: 2025-12-01 20:42:10.637114037 +0000 UTC m=+0.680543500 container remove d584068c18eece0418ad2ba80d19043ce63f5608c20f68b399ef24932dfd54ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_turing, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:42:10 compute-0 sudo[160125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:10 compute-0 systemd[1]: libpod-conmon-d584068c18eece0418ad2ba80d19043ce63f5608c20f68b399ef24932dfd54ee.scope: Deactivated successfully.
Dec 01 20:42:10 compute-0 ceph-mon[75880]: pgmap v373: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:10 compute-0 sudo[159545]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:10 compute-0 sudo[160130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:42:10 compute-0 sudo[160130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:42:10 compute-0 sudo[160130]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:10 compute-0 sudo[160155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:42:10 compute-0 sudo[160155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:42:10 compute-0 python3.9[160129]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:10 compute-0 sudo[160125]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:11 compute-0 podman[160238]: 2025-12-01 20:42:11.05934779 +0000 UTC m=+0.037948512 container create dda50968b19b3f48d81645c5a25f1234e119472031c5433890f20e243294a396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:42:11 compute-0 systemd[1]: Started libpod-conmon-dda50968b19b3f48d81645c5a25f1234e119472031c5433890f20e243294a396.scope.
Dec 01 20:42:11 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:42:11 compute-0 podman[160238]: 2025-12-01 20:42:11.043065322 +0000 UTC m=+0.021666064 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:42:11 compute-0 podman[160238]: 2025-12-01 20:42:11.139306668 +0000 UTC m=+0.117907410 container init dda50968b19b3f48d81645c5a25f1234e119472031c5433890f20e243294a396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:42:11 compute-0 podman[160238]: 2025-12-01 20:42:11.146961582 +0000 UTC m=+0.125562304 container start dda50968b19b3f48d81645c5a25f1234e119472031c5433890f20e243294a396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 01 20:42:11 compute-0 podman[160238]: 2025-12-01 20:42:11.15080128 +0000 UTC m=+0.129402002 container attach dda50968b19b3f48d81645c5a25f1234e119472031c5433890f20e243294a396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 01 20:42:11 compute-0 flamboyant_hofstadter[160291]: 167 167
Dec 01 20:42:11 compute-0 systemd[1]: libpod-dda50968b19b3f48d81645c5a25f1234e119472031c5433890f20e243294a396.scope: Deactivated successfully.
Dec 01 20:42:11 compute-0 podman[160238]: 2025-12-01 20:42:11.153519233 +0000 UTC m=+0.132119965 container died dda50968b19b3f48d81645c5a25f1234e119472031c5433890f20e243294a396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 01 20:42:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc73484b4bab8edd1182266f4f91e1c0f177dfe22df134d5f68e1e32a078ffa2-merged.mount: Deactivated successfully.
Dec 01 20:42:11 compute-0 podman[160238]: 2025-12-01 20:42:11.204556935 +0000 UTC m=+0.183157657 container remove dda50968b19b3f48d81645c5a25f1234e119472031c5433890f20e243294a396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:42:11 compute-0 systemd[1]: libpod-conmon-dda50968b19b3f48d81645c5a25f1234e119472031c5433890f20e243294a396.scope: Deactivated successfully.
Dec 01 20:42:11 compute-0 sudo[160376]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpivmemfpbnsolsqlhsjjdpioiugysqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621731.021493-166-18899848152138/AnsiballZ_file.py'
Dec 01 20:42:11 compute-0 sudo[160376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:11 compute-0 podman[160384]: 2025-12-01 20:42:11.375104375 +0000 UTC m=+0.039301024 container create 4694b7327250d3701220e4b0418dab4831c3804d6a55750fec552292b823e8f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_swanson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 01 20:42:11 compute-0 systemd[1]: Started libpod-conmon-4694b7327250d3701220e4b0418dab4831c3804d6a55750fec552292b823e8f7.scope.
Dec 01 20:42:11 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57545c55a8d3922fda0c5da2794f4097487b201f6f3d2b9ad6758e58338e9514/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57545c55a8d3922fda0c5da2794f4097487b201f6f3d2b9ad6758e58338e9514/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57545c55a8d3922fda0c5da2794f4097487b201f6f3d2b9ad6758e58338e9514/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57545c55a8d3922fda0c5da2794f4097487b201f6f3d2b9ad6758e58338e9514/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:11 compute-0 podman[160384]: 2025-12-01 20:42:11.447579483 +0000 UTC m=+0.111776182 container init 4694b7327250d3701220e4b0418dab4831c3804d6a55750fec552292b823e8f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:42:11 compute-0 podman[160384]: 2025-12-01 20:42:11.358052263 +0000 UTC m=+0.022248932 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:42:11 compute-0 podman[160384]: 2025-12-01 20:42:11.457079614 +0000 UTC m=+0.121276263 container start 4694b7327250d3701220e4b0418dab4831c3804d6a55750fec552292b823e8f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:42:11 compute-0 podman[160384]: 2025-12-01 20:42:11.460389455 +0000 UTC m=+0.124586164 container attach 4694b7327250d3701220e4b0418dab4831c3804d6a55750fec552292b823e8f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_swanson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:42:11 compute-0 python3.9[160378]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:42:11 compute-0 sudo[160376]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:11 compute-0 ceph-mon[75880]: pgmap v374: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:11 compute-0 priceless_swanson[160401]: {
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:     "0": [
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:         {
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "devices": [
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "/dev/loop3"
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             ],
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_name": "ceph_lv0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_size": "21470642176",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "name": "ceph_lv0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "tags": {
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.cluster_name": "ceph",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.crush_device_class": "",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.encrypted": "0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.objectstore": "bluestore",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.osd_id": "0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.type": "block",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.vdo": "0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.with_tpm": "0"
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             },
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "type": "block",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "vg_name": "ceph_vg0"
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:         }
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:     ],
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:     "1": [
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:         {
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "devices": [
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "/dev/loop4"
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             ],
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_name": "ceph_lv1",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_size": "21470642176",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "name": "ceph_lv1",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "tags": {
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.cluster_name": "ceph",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.crush_device_class": "",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.encrypted": "0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.objectstore": "bluestore",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.osd_id": "1",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.type": "block",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.vdo": "0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.with_tpm": "0"
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             },
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "type": "block",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "vg_name": "ceph_vg1"
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:         }
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:     ],
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:     "2": [
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:         {
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "devices": [
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "/dev/loop5"
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             ],
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_name": "ceph_lv2",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_size": "21470642176",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "name": "ceph_lv2",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "tags": {
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.cluster_name": "ceph",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.crush_device_class": "",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.encrypted": "0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.objectstore": "bluestore",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.osd_id": "2",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.type": "block",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.vdo": "0",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:                 "ceph.with_tpm": "0"
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             },
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "type": "block",
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:             "vg_name": "ceph_vg2"
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:         }
Dec 01 20:42:11 compute-0 priceless_swanson[160401]:     ]
Dec 01 20:42:11 compute-0 priceless_swanson[160401]: }
Dec 01 20:42:11 compute-0 systemd[1]: libpod-4694b7327250d3701220e4b0418dab4831c3804d6a55750fec552292b823e8f7.scope: Deactivated successfully.
Dec 01 20:42:11 compute-0 podman[160384]: 2025-12-01 20:42:11.741451428 +0000 UTC m=+0.405648077 container died 4694b7327250d3701220e4b0418dab4831c3804d6a55750fec552292b823e8f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_swanson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 01 20:42:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-57545c55a8d3922fda0c5da2794f4097487b201f6f3d2b9ad6758e58338e9514-merged.mount: Deactivated successfully.
Dec 01 20:42:11 compute-0 podman[160384]: 2025-12-01 20:42:11.792089188 +0000 UTC m=+0.456285837 container remove 4694b7327250d3701220e4b0418dab4831c3804d6a55750fec552292b823e8f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_swanson, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:42:11 compute-0 systemd[1]: libpod-conmon-4694b7327250d3701220e4b0418dab4831c3804d6a55750fec552292b823e8f7.scope: Deactivated successfully.
Dec 01 20:42:11 compute-0 sudo[160155]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:11 compute-0 sudo[160519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:42:11 compute-0 sudo[160519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:42:11 compute-0 sudo[160519]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:11 compute-0 sudo[160580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:42:11 compute-0 sudo[160630]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqbkwyzybvqejigbtqhgjqfrfydnmdsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621731.72063-217-8746330758185/AnsiballZ_command.py'
Dec 01 20:42:11 compute-0 sudo[160630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:11 compute-0 sudo[160580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:42:12 compute-0 podman[160567]: 2025-12-01 20:42:12.006467009 +0000 UTC m=+0.085761216 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 01 20:42:12 compute-0 python3.9[160638]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:42:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:12 compute-0 sudo[160630]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:12 compute-0 podman[160666]: 2025-12-01 20:42:12.248064724 +0000 UTC m=+0.036759777 container create 406b55c5ef109ad4d6e96b3b9d900c022c7ae4c2103767bb48e46b187e67d4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lewin, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 01 20:42:12 compute-0 systemd[1]: Started libpod-conmon-406b55c5ef109ad4d6e96b3b9d900c022c7ae4c2103767bb48e46b187e67d4f4.scope.
Dec 01 20:42:12 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:42:12 compute-0 podman[160666]: 2025-12-01 20:42:12.32015748 +0000 UTC m=+0.108852553 container init 406b55c5ef109ad4d6e96b3b9d900c022c7ae4c2103767bb48e46b187e67d4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lewin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 01 20:42:12 compute-0 podman[160666]: 2025-12-01 20:42:12.327237837 +0000 UTC m=+0.115932890 container start 406b55c5ef109ad4d6e96b3b9d900c022c7ae4c2103767bb48e46b187e67d4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:42:12 compute-0 podman[160666]: 2025-12-01 20:42:12.232534018 +0000 UTC m=+0.021229071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:42:12 compute-0 podman[160666]: 2025-12-01 20:42:12.332021593 +0000 UTC m=+0.120716676 container attach 406b55c5ef109ad4d6e96b3b9d900c022c7ae4c2103767bb48e46b187e67d4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lewin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:42:12 compute-0 nifty_lewin[160713]: 167 167
Dec 01 20:42:12 compute-0 systemd[1]: libpod-406b55c5ef109ad4d6e96b3b9d900c022c7ae4c2103767bb48e46b187e67d4f4.scope: Deactivated successfully.
Dec 01 20:42:12 compute-0 podman[160666]: 2025-12-01 20:42:12.333493958 +0000 UTC m=+0.122189011 container died 406b55c5ef109ad4d6e96b3b9d900c022c7ae4c2103767bb48e46b187e67d4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lewin, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:42:12 compute-0 podman[160701]: 2025-12-01 20:42:12.344414902 +0000 UTC m=+0.062899876 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 01 20:42:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af0188dd4f69804afc1abeb15d15bc1aa9cb1b05da560354c6fbb266bcc9d94-merged.mount: Deactivated successfully.
Dec 01 20:42:12 compute-0 podman[160666]: 2025-12-01 20:42:12.373886505 +0000 UTC m=+0.162581568 container remove 406b55c5ef109ad4d6e96b3b9d900c022c7ae4c2103767bb48e46b187e67d4f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 01 20:42:12 compute-0 systemd[1]: libpod-conmon-406b55c5ef109ad4d6e96b3b9d900c022c7ae4c2103767bb48e46b187e67d4f4.scope: Deactivated successfully.
Dec 01 20:42:12 compute-0 podman[160797]: 2025-12-01 20:42:12.545644522 +0000 UTC m=+0.050870278 container create ccd9c8c1c036fdbd8a23b7ee6e27fafd6d2733b95efaa4039144cd1f54f88f6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kowalevski, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:42:12 compute-0 systemd[1]: Started libpod-conmon-ccd9c8c1c036fdbd8a23b7ee6e27fafd6d2733b95efaa4039144cd1f54f88f6d.scope.
Dec 01 20:42:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:12 compute-0 podman[160797]: 2025-12-01 20:42:12.518880372 +0000 UTC m=+0.024106178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:42:12 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688b8ae0677010ab52c7b7bda6fd77eea1e6d4e62ea37ae537d6c213ad8dc6b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688b8ae0677010ab52c7b7bda6fd77eea1e6d4e62ea37ae537d6c213ad8dc6b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688b8ae0677010ab52c7b7bda6fd77eea1e6d4e62ea37ae537d6c213ad8dc6b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688b8ae0677010ab52c7b7bda6fd77eea1e6d4e62ea37ae537d6c213ad8dc6b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:42:12 compute-0 podman[160797]: 2025-12-01 20:42:12.634633205 +0000 UTC m=+0.139859011 container init ccd9c8c1c036fdbd8a23b7ee6e27fafd6d2733b95efaa4039144cd1f54f88f6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:42:12 compute-0 podman[160797]: 2025-12-01 20:42:12.642977831 +0000 UTC m=+0.148203607 container start ccd9c8c1c036fdbd8a23b7ee6e27fafd6d2733b95efaa4039144cd1f54f88f6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kowalevski, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:42:12 compute-0 podman[160797]: 2025-12-01 20:42:12.64690615 +0000 UTC m=+0.152131906 container attach ccd9c8c1c036fdbd8a23b7ee6e27fafd6d2733b95efaa4039144cd1f54f88f6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kowalevski, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:42:12 compute-0 python3.9[160892]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 01 20:42:13 compute-0 lvm[161071]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:42:13 compute-0 lvm[161069]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:42:13 compute-0 lvm[161069]: VG ceph_vg0 finished
Dec 01 20:42:13 compute-0 lvm[161071]: VG ceph_vg1 finished
Dec 01 20:42:13 compute-0 lvm[161091]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:42:13 compute-0 lvm[161091]: VG ceph_vg2 finished
Dec 01 20:42:13 compute-0 sudo[161120]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxibdqdfambqsopxzeofpxmfowuaeun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621733.1214876-235-72149172871060/AnsiballZ_systemd_service.py'
Dec 01 20:42:13 compute-0 sudo[161120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:13 compute-0 optimistic_kowalevski[160834]: {}
Dec 01 20:42:13 compute-0 systemd[1]: libpod-ccd9c8c1c036fdbd8a23b7ee6e27fafd6d2733b95efaa4039144cd1f54f88f6d.scope: Deactivated successfully.
Dec 01 20:42:13 compute-0 systemd[1]: libpod-ccd9c8c1c036fdbd8a23b7ee6e27fafd6d2733b95efaa4039144cd1f54f88f6d.scope: Consumed 1.217s CPU time.
Dec 01 20:42:13 compute-0 podman[160797]: 2025-12-01 20:42:13.421155708 +0000 UTC m=+0.926381484 container died ccd9c8c1c036fdbd8a23b7ee6e27fafd6d2733b95efaa4039144cd1f54f88f6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kowalevski, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:42:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-688b8ae0677010ab52c7b7bda6fd77eea1e6d4e62ea37ae537d6c213ad8dc6b3-merged.mount: Deactivated successfully.
Dec 01 20:42:13 compute-0 podman[160797]: 2025-12-01 20:42:13.468142396 +0000 UTC m=+0.973368142 container remove ccd9c8c1c036fdbd8a23b7ee6e27fafd6d2733b95efaa4039144cd1f54f88f6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:42:13 compute-0 systemd[1]: libpod-conmon-ccd9c8c1c036fdbd8a23b7ee6e27fafd6d2733b95efaa4039144cd1f54f88f6d.scope: Deactivated successfully.
Dec 01 20:42:13 compute-0 sudo[160580]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:42:13 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:42:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:42:13 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:42:13 compute-0 sudo[161137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:42:13 compute-0 sudo[161137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:42:13 compute-0 sudo[161137]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:13 compute-0 python3.9[161123]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 01 20:42:13 compute-0 systemd[1]: Reloading.
Dec 01 20:42:13 compute-0 ceph-mon[75880]: pgmap v375: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:13 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:42:13 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:42:13 compute-0 systemd-rc-local-generator[161190]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:42:13 compute-0 systemd-sysv-generator[161193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:42:13 compute-0 sudo[161120]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:14 compute-0 sudo[161347]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rguqoxutolliqrhghpunzrkalxefervj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621734.1228092-243-40425637920103/AnsiballZ_command.py'
Dec 01 20:42:14 compute-0 sudo[161347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:14 compute-0 python3.9[161349]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:42:14 compute-0 sudo[161347]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:15 compute-0 sudo[161500]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmvulrawlhbijonclqclimasqpfkxigf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621734.8277926-243-382216114643/AnsiballZ_command.py'
Dec 01 20:42:15 compute-0 sudo[161500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:15 compute-0 python3.9[161502]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:42:15 compute-0 sudo[161500]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:15 compute-0 sudo[161653]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-latqhdkawdxwecnxbomfsdutsxemzdtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621735.458646-243-217132961953347/AnsiballZ_command.py'
Dec 01 20:42:15 compute-0 sudo[161653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:15 compute-0 ceph-mon[75880]: pgmap v376: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:15 compute-0 python3.9[161655]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:42:15 compute-0 sudo[161653]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:16 compute-0 sudo[161806]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clqxujyysowcjlvpgugrsyomtnbzyawu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621736.0521536-243-124133472792111/AnsiballZ_command.py'
Dec 01 20:42:16 compute-0 sudo[161806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:16 compute-0 python3.9[161808]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:42:16 compute-0 sudo[161806]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:42:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4357 writes, 20K keys, 4357 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4357 writes, 454 syncs, 9.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4357 writes, 20K keys, 4357 commit groups, 1.0 writes per commit group, ingest: 16.42 MB, 0.03 MB/s
                                           Interval WAL: 4357 writes, 454 syncs, 9.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:42:16 compute-0 sudo[161959]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdirjjgiwdctrcbstubofwtakdhclmrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621736.6382923-243-86740973960401/AnsiballZ_command.py'
Dec 01 20:42:16 compute-0 sudo[161959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:17 compute-0 python3.9[161961]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:42:17 compute-0 sudo[161959]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:17 compute-0 sudo[162112]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukonlcvscwjkvstxccxgmbqrjikzixpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621737.2358148-243-154492633075926/AnsiballZ_command.py'
Dec 01 20:42:17 compute-0 sudo[162112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:17 compute-0 python3.9[162114]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:42:17 compute-0 ceph-mon[75880]: pgmap v377: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:17 compute-0 sudo[162112]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:18 compute-0 sudo[162265]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxewuulrnafcziajypmuvvcwleissvfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621737.898689-243-85864735976346/AnsiballZ_command.py'
Dec 01 20:42:18 compute-0 sudo[162265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:18 compute-0 python3.9[162267]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:42:18 compute-0 sudo[162265]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:19 compute-0 sudo[162418]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnnxzzeeivbpshbrrgerjvsqtblzpsgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621738.8105593-297-34419299607177/AnsiballZ_getent.py'
Dec 01 20:42:19 compute-0 sudo[162418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:19 compute-0 python3.9[162420]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 01 20:42:19 compute-0 sudo[162418]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:19 compute-0 ceph-mon[75880]: pgmap v378: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:19 compute-0 sudo[162571]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvbapxdgfoashmfccfzjqpdulkbkbrcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621739.611263-305-33085913632425/AnsiballZ_group.py'
Dec 01 20:42:19 compute-0 sudo[162571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:20 compute-0 python3.9[162573]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 01 20:42:20 compute-0 groupadd[162574]: group added to /etc/group: name=libvirt, GID=42473
Dec 01 20:42:20 compute-0 groupadd[162574]: group added to /etc/gshadow: name=libvirt
Dec 01 20:42:20 compute-0 groupadd[162574]: new group: name=libvirt, GID=42473
Dec 01 20:42:20 compute-0 sudo[162571]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:20 compute-0 sudo[162729]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlgvjykiaitawedbrjjtopnzxvmorpgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621740.426047-313-177540466869420/AnsiballZ_user.py'
Dec 01 20:42:20 compute-0 sudo[162729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:21 compute-0 python3.9[162731]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 01 20:42:21 compute-0 useradd[162733]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/1
Dec 01 20:42:21 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 20:42:21 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 20:42:21 compute-0 sudo[162729]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:42:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 4512 writes, 20K keys, 4512 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4512 writes, 503 syncs, 8.97 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4512 writes, 20K keys, 4512 commit groups, 1.0 writes per commit group, ingest: 16.59 MB, 0.03 MB/s
                                           Interval WAL: 4512 writes, 503 syncs, 8.97 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:42:21 compute-0 sudo[162890]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmavopkbjwqtfyqinkfbrlugotluxevo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621741.5814211-324-112698777656007/AnsiballZ_setup.py'
Dec 01 20:42:21 compute-0 sudo[162890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:22 compute-0 ceph-mon[75880]: pgmap v379: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:22 compute-0 python3.9[162892]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:42:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:22 compute-0 sudo[162890]: pam_unix(sudo:session): session closed for user root
Dec 01 20:42:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v380: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:22 compute-0 sudo[162974]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tunitpgwvmnghjclzvqvukyjxbomnnei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621741.5814211-324-112698777656007/AnsiballZ_dnf.py'
Dec 01 20:42:22 compute-0 sudo[162974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:42:23 compute-0 python3.9[162976]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:42:23 compute-0 ceph-mon[75880]: pgmap v380: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:25 compute-0 ceph-mon[75880]: pgmap v381: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:42:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4156 writes, 19K keys, 4156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4156 writes, 370 syncs, 11.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4156 writes, 19K keys, 4156 commit groups, 1.0 writes per commit group, ingest: 16.17 MB, 0.03 MB/s
                                           Interval WAL: 4156 writes, 370 syncs, 11.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:42:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v382: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:28 compute-0 ceph-mgr[76174]: [devicehealth INFO root] Check health
Dec 01 20:42:28 compute-0 ceph-mon[75880]: pgmap v382: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:29 compute-0 ceph-mon[75880]: pgmap v383: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v384: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:32 compute-0 ceph-mon[75880]: pgmap v384: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:42:32
Dec 01 20:42:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:42:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:42:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.mgr', 'vms', 'images', 'cephfs.cephfs.data', 'volumes']
Dec 01 20:42:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:42:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:42:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:42:33 compute-0 ceph-mon[75880]: pgmap v385: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v386: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:35 compute-0 ceph-mon[75880]: pgmap v386: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v387: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:37 compute-0 ceph-mon[75880]: pgmap v387: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:39 compute-0 ceph-mon[75880]: pgmap v388: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:42:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:41 compute-0 ceph-mon[75880]: pgmap v389: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v390: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:43 compute-0 podman[163156]: 2025-12-01 20:42:43.109485682 +0000 UTC m=+0.059102250 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 01 20:42:43 compute-0 podman[163157]: 2025-12-01 20:42:43.13132285 +0000 UTC m=+0.080327609 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:42:43 compute-0 ceph-mon[75880]: pgmap v390: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:42:44.340 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:42:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:42:44.340 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:42:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:42:44.340 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:42:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:45 compute-0 ceph-mon[75880]: pgmap v391: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:47 compute-0 ceph-mon[75880]: pgmap v392: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v393: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:49 compute-0 ceph-mon[75880]: pgmap v393: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:51 compute-0 ceph-mon[75880]: pgmap v394: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:53 compute-0 ceph-mon[75880]: pgmap v395: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v396: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:55 compute-0 ceph-mon[75880]: pgmap v396: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:55 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Dec 01 20:42:55 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 01 20:42:55 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 01 20:42:55 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 01 20:42:55 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 01 20:42:55 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 01 20:42:55 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 01 20:42:55 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 01 20:42:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:42:57 compute-0 ceph-mon[75880]: pgmap v397: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:42:59 compute-0 ceph-mon[75880]: pgmap v398: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:01 compute-0 ceph-mon[75880]: pgmap v399: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v400: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:43:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:43:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:43:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:43:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:43:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:43:03 compute-0 ceph-mon[75880]: pgmap v400: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v401: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:05 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Dec 01 20:43:05 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 01 20:43:05 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 01 20:43:05 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 01 20:43:05 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 01 20:43:05 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 01 20:43:05 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 01 20:43:05 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 01 20:43:05 compute-0 ceph-mon[75880]: pgmap v401: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:07 compute-0 ceph-mon[75880]: pgmap v402: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.866858) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621787866921, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1515, "num_deletes": 251, "total_data_size": 1664985, "memory_usage": 1692800, "flush_reason": "Manual Compaction"}
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621787880304, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1622542, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7566, "largest_seqno": 9080, "table_properties": {"data_size": 1615575, "index_size": 4042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13718, "raw_average_key_size": 19, "raw_value_size": 1601625, "raw_average_value_size": 2218, "num_data_blocks": 190, "num_entries": 722, "num_filter_entries": 722, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621622, "oldest_key_time": 1764621622, "file_creation_time": 1764621787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 13486 microseconds, and 4757 cpu microseconds.
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.880353) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1622542 bytes OK
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.880373) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.882008) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.882027) EVENT_LOG_v1 {"time_micros": 1764621787882022, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.882046) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1658401, prev total WAL file size 1658401, number of live WAL files 2.
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.882833) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1584KB)], [23(4382KB)]
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621787882901, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 6110691, "oldest_snapshot_seqno": -1}
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 2822 keys, 4837940 bytes, temperature: kUnknown
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621787925644, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 4837940, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4816427, "index_size": 13340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7109, "raw_key_size": 65477, "raw_average_key_size": 23, "raw_value_size": 4763212, "raw_average_value_size": 1687, "num_data_blocks": 596, "num_entries": 2822, "num_filter_entries": 2822, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 1764621787, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.925910) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 4837940 bytes
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.929690) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.7 rd, 113.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 4.3 +0.0 blob) out(4.6 +0.0 blob), read-write-amplify(6.7) write-amplify(3.0) OK, records in: 3336, records dropped: 514 output_compression: NoCompression
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.929712) EVENT_LOG_v1 {"time_micros": 1764621787929702, "job": 8, "event": "compaction_finished", "compaction_time_micros": 42817, "compaction_time_cpu_micros": 13989, "output_level": 6, "num_output_files": 1, "total_output_size": 4837940, "num_input_records": 3336, "num_output_records": 2822, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621787930113, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764621787931017, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.882746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.931080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.931089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.931092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.931095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:43:07 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:43:07.931098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:43:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:09 compute-0 ceph-mon[75880]: pgmap v403: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v404: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:11 compute-0 ceph-mon[75880]: pgmap v404: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:13 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 01 20:43:13 compute-0 sudo[163218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:43:13 compute-0 sudo[163218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:43:13 compute-0 sudo[163218]: pam_unix(sudo:session): session closed for user root
Dec 01 20:43:13 compute-0 sudo[163255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 01 20:43:13 compute-0 sudo[163255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:43:13 compute-0 podman[163242]: 2025-12-01 20:43:13.777917509 +0000 UTC m=+0.083431277 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 01 20:43:13 compute-0 podman[163243]: 2025-12-01 20:43:13.809924639 +0000 UTC m=+0.115100946 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:43:13 compute-0 ceph-mon[75880]: pgmap v405: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:14 compute-0 podman[163352]: 2025-12-01 20:43:14.179732836 +0000 UTC m=+0.076343352 container exec 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 01 20:43:14 compute-0 podman[163352]: 2025-12-01 20:43:14.308681628 +0000 UTC m=+0.205292134 container exec_died 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 01 20:43:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:14 compute-0 sudo[163255]: pam_unix(sudo:session): session closed for user root
Dec 01 20:43:14 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:43:14 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:43:14 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:43:14 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:43:15 compute-0 sudo[163522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:43:15 compute-0 sudo[163522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:43:15 compute-0 sudo[163522]: pam_unix(sudo:session): session closed for user root
Dec 01 20:43:15 compute-0 sudo[163547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:43:15 compute-0 sudo[163547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:43:15 compute-0 sudo[163547]: pam_unix(sudo:session): session closed for user root
Dec 01 20:43:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 01 20:43:15 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 01 20:43:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:43:15 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:43:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:43:15 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:43:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:43:15 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:43:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:43:15 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:43:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:43:15 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:43:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:43:15 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:43:15 compute-0 sudo[163604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:43:15 compute-0 sudo[163604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:43:15 compute-0 sudo[163604]: pam_unix(sudo:session): session closed for user root
Dec 01 20:43:15 compute-0 sudo[163629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:43:15 compute-0 sudo[163629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:43:15 compute-0 ceph-mon[75880]: pgmap v406: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:15 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:43:15 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:43:15 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 01 20:43:15 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:43:15 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:43:15 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:43:15 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:43:15 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:43:15 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:43:16 compute-0 podman[163666]: 2025-12-01 20:43:16.018073173 +0000 UTC m=+0.051977032 container create b973d4b1fc0044ac2c187049fd00e186ec16d99060ec40e282e8b88ab56acbf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 01 20:43:16 compute-0 systemd[1]: Started libpod-conmon-b973d4b1fc0044ac2c187049fd00e186ec16d99060ec40e282e8b88ab56acbf2.scope.
Dec 01 20:43:16 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:43:16 compute-0 podman[163666]: 2025-12-01 20:43:15.988396766 +0000 UTC m=+0.022300685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:43:16 compute-0 podman[163666]: 2025-12-01 20:43:16.093653209 +0000 UTC m=+0.127557048 container init b973d4b1fc0044ac2c187049fd00e186ec16d99060ec40e282e8b88ab56acbf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 01 20:43:16 compute-0 podman[163666]: 2025-12-01 20:43:16.100055112 +0000 UTC m=+0.133958941 container start b973d4b1fc0044ac2c187049fd00e186ec16d99060ec40e282e8b88ab56acbf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 01 20:43:16 compute-0 keen_brahmagupta[163683]: 167 167
Dec 01 20:43:16 compute-0 systemd[1]: libpod-b973d4b1fc0044ac2c187049fd00e186ec16d99060ec40e282e8b88ab56acbf2.scope: Deactivated successfully.
Dec 01 20:43:16 compute-0 podman[163666]: 2025-12-01 20:43:16.105227465 +0000 UTC m=+0.139131334 container attach b973d4b1fc0044ac2c187049fd00e186ec16d99060ec40e282e8b88ab56acbf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:43:16 compute-0 podman[163666]: 2025-12-01 20:43:16.105576716 +0000 UTC m=+0.139480575 container died b973d4b1fc0044ac2c187049fd00e186ec16d99060ec40e282e8b88ab56acbf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Dec 01 20:43:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4401d79fe666ce17b777241f95cf0ac2c46f62a49d4bd525ed497ba0e4d0ba8-merged.mount: Deactivated successfully.
Dec 01 20:43:16 compute-0 podman[163666]: 2025-12-01 20:43:16.149637437 +0000 UTC m=+0.183541256 container remove b973d4b1fc0044ac2c187049fd00e186ec16d99060ec40e282e8b88ab56acbf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_brahmagupta, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:43:16 compute-0 systemd[1]: libpod-conmon-b973d4b1fc0044ac2c187049fd00e186ec16d99060ec40e282e8b88ab56acbf2.scope: Deactivated successfully.
Dec 01 20:43:16 compute-0 podman[163709]: 2025-12-01 20:43:16.317158947 +0000 UTC m=+0.058841708 container create cb67a4cda1a8a997271b9ec69c0808d989736cf3749cd8c9b8ff05352b065fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Dec 01 20:43:16 compute-0 systemd[1]: Started libpod-conmon-cb67a4cda1a8a997271b9ec69c0808d989736cf3749cd8c9b8ff05352b065fde.scope.
Dec 01 20:43:16 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a99f0153e767ce26dd760feb1ff5a1bc9682d6fdad043a41331d23cf8b036c52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a99f0153e767ce26dd760feb1ff5a1bc9682d6fdad043a41331d23cf8b036c52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a99f0153e767ce26dd760feb1ff5a1bc9682d6fdad043a41331d23cf8b036c52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a99f0153e767ce26dd760feb1ff5a1bc9682d6fdad043a41331d23cf8b036c52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a99f0153e767ce26dd760feb1ff5a1bc9682d6fdad043a41331d23cf8b036c52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:16 compute-0 podman[163709]: 2025-12-01 20:43:16.283980419 +0000 UTC m=+0.025663160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:43:16 compute-0 podman[163709]: 2025-12-01 20:43:16.395274644 +0000 UTC m=+0.136957405 container init cb67a4cda1a8a997271b9ec69c0808d989736cf3749cd8c9b8ff05352b065fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:43:16 compute-0 podman[163709]: 2025-12-01 20:43:16.402335136 +0000 UTC m=+0.144017857 container start cb67a4cda1a8a997271b9ec69c0808d989736cf3749cd8c9b8ff05352b065fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:43:16 compute-0 podman[163709]: 2025-12-01 20:43:16.407968934 +0000 UTC m=+0.149651715 container attach cb67a4cda1a8a997271b9ec69c0808d989736cf3749cd8c9b8ff05352b065fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:43:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v407: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:16 compute-0 goofy_noyce[163727]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:43:16 compute-0 goofy_noyce[163727]: --> All data devices are unavailable
Dec 01 20:43:16 compute-0 systemd[1]: libpod-cb67a4cda1a8a997271b9ec69c0808d989736cf3749cd8c9b8ff05352b065fde.scope: Deactivated successfully.
Dec 01 20:43:16 compute-0 podman[163709]: 2025-12-01 20:43:16.87115927 +0000 UTC m=+0.612841991 container died cb67a4cda1a8a997271b9ec69c0808d989736cf3749cd8c9b8ff05352b065fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:43:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a99f0153e767ce26dd760feb1ff5a1bc9682d6fdad043a41331d23cf8b036c52-merged.mount: Deactivated successfully.
Dec 01 20:43:17 compute-0 podman[163709]: 2025-12-01 20:43:17.013544566 +0000 UTC m=+0.755227287 container remove cb67a4cda1a8a997271b9ec69c0808d989736cf3749cd8c9b8ff05352b065fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_noyce, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:43:17 compute-0 systemd[1]: libpod-conmon-cb67a4cda1a8a997271b9ec69c0808d989736cf3749cd8c9b8ff05352b065fde.scope: Deactivated successfully.
Dec 01 20:43:17 compute-0 sudo[163629]: pam_unix(sudo:session): session closed for user root
Dec 01 20:43:17 compute-0 sudo[163813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:43:17 compute-0 sudo[163813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:43:17 compute-0 sudo[163813]: pam_unix(sudo:session): session closed for user root
Dec 01 20:43:17 compute-0 sudo[163883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:43:17 compute-0 sudo[163883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:43:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:17 compute-0 podman[164095]: 2025-12-01 20:43:17.443565454 +0000 UTC m=+0.035884254 container create 85fabba3cc47c45515532daf18fc3ca0426544325127cddcea188fa053ee174c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_galois, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:43:17 compute-0 systemd[1]: Started libpod-conmon-85fabba3cc47c45515532daf18fc3ca0426544325127cddcea188fa053ee174c.scope.
Dec 01 20:43:17 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:43:17 compute-0 podman[164095]: 2025-12-01 20:43:17.508865256 +0000 UTC m=+0.101184056 container init 85fabba3cc47c45515532daf18fc3ca0426544325127cddcea188fa053ee174c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_galois, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:43:17 compute-0 podman[164095]: 2025-12-01 20:43:17.515245558 +0000 UTC m=+0.107564358 container start 85fabba3cc47c45515532daf18fc3ca0426544325127cddcea188fa053ee174c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_galois, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:43:17 compute-0 podman[164095]: 2025-12-01 20:43:17.518726427 +0000 UTC m=+0.111045257 container attach 85fabba3cc47c45515532daf18fc3ca0426544325127cddcea188fa053ee174c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 01 20:43:17 compute-0 angry_galois[164166]: 167 167
Dec 01 20:43:17 compute-0 systemd[1]: libpod-85fabba3cc47c45515532daf18fc3ca0426544325127cddcea188fa053ee174c.scope: Deactivated successfully.
Dec 01 20:43:17 compute-0 podman[164095]: 2025-12-01 20:43:17.519877264 +0000 UTC m=+0.112196074 container died 85fabba3cc47c45515532daf18fc3ca0426544325127cddcea188fa053ee174c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:43:17 compute-0 podman[164095]: 2025-12-01 20:43:17.426852946 +0000 UTC m=+0.019171796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f95df5e274dcc761d88dc5bef48c204897788ed90f27c57ea2ac22a04c6a2fce-merged.mount: Deactivated successfully.
Dec 01 20:43:17 compute-0 podman[164095]: 2025-12-01 20:43:17.551327067 +0000 UTC m=+0.143645867 container remove 85fabba3cc47c45515532daf18fc3ca0426544325127cddcea188fa053ee174c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_galois, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:43:17 compute-0 systemd[1]: libpod-conmon-85fabba3cc47c45515532daf18fc3ca0426544325127cddcea188fa053ee174c.scope: Deactivated successfully.
Dec 01 20:43:17 compute-0 podman[164317]: 2025-12-01 20:43:17.690309855 +0000 UTC m=+0.034762939 container create fb766363beaaf5d5ac1424eff97a27da5853ba2ac48cd1e3af0b539f0d5bc2ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:43:17 compute-0 systemd[1]: Started libpod-conmon-fb766363beaaf5d5ac1424eff97a27da5853ba2ac48cd1e3af0b539f0d5bc2ac.scope.
Dec 01 20:43:17 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81f464cd972ee0da467f18795c4e0ff601847745c764c6d4262198ad350957a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81f464cd972ee0da467f18795c4e0ff601847745c764c6d4262198ad350957a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81f464cd972ee0da467f18795c4e0ff601847745c764c6d4262198ad350957a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d81f464cd972ee0da467f18795c4e0ff601847745c764c6d4262198ad350957a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:17 compute-0 podman[164317]: 2025-12-01 20:43:17.762955809 +0000 UTC m=+0.107408893 container init fb766363beaaf5d5ac1424eff97a27da5853ba2ac48cd1e3af0b539f0d5bc2ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mestorf, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 01 20:43:17 compute-0 podman[164317]: 2025-12-01 20:43:17.769570118 +0000 UTC m=+0.114023202 container start fb766363beaaf5d5ac1424eff97a27da5853ba2ac48cd1e3af0b539f0d5bc2ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mestorf, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 01 20:43:17 compute-0 podman[164317]: 2025-12-01 20:43:17.67430394 +0000 UTC m=+0.018757044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:43:17 compute-0 podman[164317]: 2025-12-01 20:43:17.772063167 +0000 UTC m=+0.116516251 container attach fb766363beaaf5d5ac1424eff97a27da5853ba2ac48cd1e3af0b539f0d5bc2ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mestorf, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 01 20:43:17 compute-0 ceph-mon[75880]: pgmap v407: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]: {
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:     "0": [
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:         {
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "devices": [
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "/dev/loop3"
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             ],
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_name": "ceph_lv0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_size": "21470642176",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "name": "ceph_lv0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "tags": {
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.cluster_name": "ceph",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.crush_device_class": "",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.encrypted": "0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.objectstore": "bluestore",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.osd_id": "0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.type": "block",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.vdo": "0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.with_tpm": "0"
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             },
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "type": "block",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "vg_name": "ceph_vg0"
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:         }
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:     ],
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:     "1": [
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:         {
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "devices": [
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "/dev/loop4"
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             ],
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_name": "ceph_lv1",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_size": "21470642176",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "name": "ceph_lv1",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "tags": {
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.cluster_name": "ceph",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.crush_device_class": "",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.encrypted": "0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.objectstore": "bluestore",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.osd_id": "1",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.type": "block",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.vdo": "0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.with_tpm": "0"
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             },
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "type": "block",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "vg_name": "ceph_vg1"
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:         }
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:     ],
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:     "2": [
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:         {
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "devices": [
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "/dev/loop5"
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             ],
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_name": "ceph_lv2",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_size": "21470642176",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "name": "ceph_lv2",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "tags": {
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.cluster_name": "ceph",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.crush_device_class": "",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.encrypted": "0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.objectstore": "bluestore",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.osd_id": "2",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.type": "block",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.vdo": "0",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:                 "ceph.with_tpm": "0"
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             },
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "type": "block",
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:             "vg_name": "ceph_vg2"
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:         }
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]:     ]
Dec 01 20:43:18 compute-0 cranky_mestorf[164396]: }
Dec 01 20:43:18 compute-0 systemd[1]: libpod-fb766363beaaf5d5ac1424eff97a27da5853ba2ac48cd1e3af0b539f0d5bc2ac.scope: Deactivated successfully.
Dec 01 20:43:18 compute-0 podman[164317]: 2025-12-01 20:43:18.092124234 +0000 UTC m=+0.436577318 container died fb766363beaaf5d5ac1424eff97a27da5853ba2ac48cd1e3af0b539f0d5bc2ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mestorf, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 01 20:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d81f464cd972ee0da467f18795c4e0ff601847745c764c6d4262198ad350957a-merged.mount: Deactivated successfully.
Dec 01 20:43:18 compute-0 podman[164317]: 2025-12-01 20:43:18.139084026 +0000 UTC m=+0.483537110 container remove fb766363beaaf5d5ac1424eff97a27da5853ba2ac48cd1e3af0b539f0d5bc2ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Dec 01 20:43:18 compute-0 systemd[1]: libpod-conmon-fb766363beaaf5d5ac1424eff97a27da5853ba2ac48cd1e3af0b539f0d5bc2ac.scope: Deactivated successfully.
Dec 01 20:43:18 compute-0 sudo[163883]: pam_unix(sudo:session): session closed for user root
Dec 01 20:43:18 compute-0 sudo[164722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:43:18 compute-0 sudo[164722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:43:18 compute-0 sudo[164722]: pam_unix(sudo:session): session closed for user root
Dec 01 20:43:18 compute-0 sudo[164785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:43:18 compute-0 sudo[164785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:43:18 compute-0 podman[165005]: 2025-12-01 20:43:18.6022531 +0000 UTC m=+0.041390487 container create 76be35886c0dae757b7cefee67eb05e41f71772d94747b6fee721c6478c48e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_herschel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 01 20:43:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:18 compute-0 systemd[1]: Started libpod-conmon-76be35886c0dae757b7cefee67eb05e41f71772d94747b6fee721c6478c48e41.scope.
Dec 01 20:43:18 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:43:18 compute-0 podman[165005]: 2025-12-01 20:43:18.666818999 +0000 UTC m=+0.105956396 container init 76be35886c0dae757b7cefee67eb05e41f71772d94747b6fee721c6478c48e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Dec 01 20:43:18 compute-0 podman[165005]: 2025-12-01 20:43:18.672514629 +0000 UTC m=+0.111652016 container start 76be35886c0dae757b7cefee67eb05e41f71772d94747b6fee721c6478c48e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_herschel, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 01 20:43:18 compute-0 podman[165005]: 2025-12-01 20:43:18.675768402 +0000 UTC m=+0.114905819 container attach 76be35886c0dae757b7cefee67eb05e41f71772d94747b6fee721c6478c48e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:43:18 compute-0 tender_herschel[165072]: 167 167
Dec 01 20:43:18 compute-0 systemd[1]: libpod-76be35886c0dae757b7cefee67eb05e41f71772d94747b6fee721c6478c48e41.scope: Deactivated successfully.
Dec 01 20:43:18 compute-0 podman[165005]: 2025-12-01 20:43:18.676816976 +0000 UTC m=+0.115954363 container died 76be35886c0dae757b7cefee67eb05e41f71772d94747b6fee721c6478c48e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_herschel, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 01 20:43:18 compute-0 podman[165005]: 2025-12-01 20:43:18.582494607 +0000 UTC m=+0.021632024 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9e48adeac1aebef8baf9acf8eff1ca9954cd6916ec2d4c43c1500f154177929-merged.mount: Deactivated successfully.
Dec 01 20:43:18 compute-0 podman[165005]: 2025-12-01 20:43:18.712040117 +0000 UTC m=+0.151177504 container remove 76be35886c0dae757b7cefee67eb05e41f71772d94747b6fee721c6478c48e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_herschel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:43:18 compute-0 systemd[1]: libpod-conmon-76be35886c0dae757b7cefee67eb05e41f71772d94747b6fee721c6478c48e41.scope: Deactivated successfully.
Dec 01 20:43:18 compute-0 podman[165210]: 2025-12-01 20:43:18.85462371 +0000 UTC m=+0.033255951 container create 557018405c1dce8c3fa4eb1db90252c1be25e67e1d744b5e03faafdc19723bfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_jackson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:43:18 compute-0 systemd[1]: Started libpod-conmon-557018405c1dce8c3fa4eb1db90252c1be25e67e1d744b5e03faafdc19723bfd.scope.
Dec 01 20:43:18 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07d3062e026035e03d018b8eb3763a54ea6aea03805f3a1e6443737171d1fbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07d3062e026035e03d018b8eb3763a54ea6aea03805f3a1e6443737171d1fbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07d3062e026035e03d018b8eb3763a54ea6aea03805f3a1e6443737171d1fbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07d3062e026035e03d018b8eb3763a54ea6aea03805f3a1e6443737171d1fbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:43:18 compute-0 podman[165210]: 2025-12-01 20:43:18.929421021 +0000 UTC m=+0.108053302 container init 557018405c1dce8c3fa4eb1db90252c1be25e67e1d744b5e03faafdc19723bfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 01 20:43:18 compute-0 podman[165210]: 2025-12-01 20:43:18.840167833 +0000 UTC m=+0.018800094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:43:18 compute-0 podman[165210]: 2025-12-01 20:43:18.94235944 +0000 UTC m=+0.120991691 container start 557018405c1dce8c3fa4eb1db90252c1be25e67e1d744b5e03faafdc19723bfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:43:18 compute-0 podman[165210]: 2025-12-01 20:43:18.946037717 +0000 UTC m=+0.124669958 container attach 557018405c1dce8c3fa4eb1db90252c1be25e67e1d744b5e03faafdc19723bfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_jackson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:43:19 compute-0 lvm[165817]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:43:19 compute-0 lvm[165820]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:43:19 compute-0 lvm[165820]: VG ceph_vg1 finished
Dec 01 20:43:19 compute-0 lvm[165817]: VG ceph_vg0 finished
Dec 01 20:43:19 compute-0 lvm[165839]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:43:19 compute-0 lvm[165839]: VG ceph_vg2 finished
Dec 01 20:43:19 compute-0 funny_jackson[165275]: {}
Dec 01 20:43:19 compute-0 systemd[1]: libpod-557018405c1dce8c3fa4eb1db90252c1be25e67e1d744b5e03faafdc19723bfd.scope: Deactivated successfully.
Dec 01 20:43:19 compute-0 systemd[1]: libpod-557018405c1dce8c3fa4eb1db90252c1be25e67e1d744b5e03faafdc19723bfd.scope: Consumed 1.217s CPU time.
Dec 01 20:43:19 compute-0 podman[165919]: 2025-12-01 20:43:19.746155201 +0000 UTC m=+0.023093251 container died 557018405c1dce8c3fa4eb1db90252c1be25e67e1d744b5e03faafdc19723bfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e07d3062e026035e03d018b8eb3763a54ea6aea03805f3a1e6443737171d1fbf-merged.mount: Deactivated successfully.
Dec 01 20:43:19 compute-0 podman[165919]: 2025-12-01 20:43:19.787794305 +0000 UTC m=+0.064732325 container remove 557018405c1dce8c3fa4eb1db90252c1be25e67e1d744b5e03faafdc19723bfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 01 20:43:19 compute-0 systemd[1]: libpod-conmon-557018405c1dce8c3fa4eb1db90252c1be25e67e1d744b5e03faafdc19723bfd.scope: Deactivated successfully.
Dec 01 20:43:19 compute-0 sudo[164785]: pam_unix(sudo:session): session closed for user root
Dec 01 20:43:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:43:19 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:43:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:43:19 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:43:19 compute-0 sudo[166036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:43:19 compute-0 sudo[166036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:43:19 compute-0 sudo[166036]: pam_unix(sudo:session): session closed for user root
Dec 01 20:43:19 compute-0 ceph-mon[75880]: pgmap v408: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:19 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:43:19 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:43:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v409: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:22 compute-0 ceph-mon[75880]: pgmap v409: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:24 compute-0 ceph-mon[75880]: pgmap v410: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v411: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:26 compute-0 ceph-mon[75880]: pgmap v411: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:28 compute-0 ceph-mon[75880]: pgmap v412: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:30 compute-0 ceph-mon[75880]: pgmap v413: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v414: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:32 compute-0 ceph-mon[75880]: pgmap v414: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:43:32
Dec 01 20:43:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:43:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:43:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['vms', 'images', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'backups', 'cephfs.cephfs.data']
Dec 01 20:43:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:43:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:43:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:43:34 compute-0 ceph-mon[75880]: pgmap v415: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:36 compute-0 ceph-mon[75880]: pgmap v416: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:38 compute-0 ceph-mon[75880]: pgmap v417: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:39 compute-0 ceph-mon[75880]: pgmap v418: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:43:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:41 compute-0 ceph-mon[75880]: pgmap v419: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:43 compute-0 ceph-mon[75880]: pgmap v420: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:44 compute-0 podman[180491]: 2025-12-01 20:43:44.117131944 +0000 UTC m=+0.075807164 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 01 20:43:44 compute-0 podman[180503]: 2025-12-01 20:43:44.134401769 +0000 UTC m=+0.088841746 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 01 20:43:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:43:44.341 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:43:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:43:44.341 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:43:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:43:44.342 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:43:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:46 compute-0 ceph-mon[75880]: pgmap v421: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:47 compute-0 ceph-mon[75880]: pgmap v422: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:49 compute-0 ceph-mon[75880]: pgmap v423: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v424: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:51 compute-0 ceph-mon[75880]: pgmap v424: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:53 compute-0 ceph-mon[75880]: pgmap v425: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:55 compute-0 ceph-mon[75880]: pgmap v426: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v427: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:56 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Dec 01 20:43:56 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 01 20:43:56 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 01 20:43:56 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 01 20:43:56 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 01 20:43:56 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 01 20:43:56 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 01 20:43:56 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 01 20:43:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:43:57 compute-0 groupadd[181002]: group added to /etc/group: name=dnsmasq, GID=991
Dec 01 20:43:57 compute-0 groupadd[181002]: group added to /etc/gshadow: name=dnsmasq
Dec 01 20:43:57 compute-0 ceph-mon[75880]: pgmap v427: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:57 compute-0 groupadd[181002]: new group: name=dnsmasq, GID=991
Dec 01 20:43:57 compute-0 useradd[181009]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 01 20:43:58 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec 01 20:43:58 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 01 20:43:58 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Dec 01 20:43:58 compute-0 groupadd[181021]: group added to /etc/group: name=clevis, GID=990
Dec 01 20:43:58 compute-0 groupadd[181021]: group added to /etc/gshadow: name=clevis
Dec 01 20:43:58 compute-0 groupadd[181021]: new group: name=clevis, GID=990
Dec 01 20:43:58 compute-0 useradd[181028]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 01 20:43:58 compute-0 usermod[181038]: add 'clevis' to group 'tss'
Dec 01 20:43:58 compute-0 usermod[181038]: add 'clevis' to shadow group 'tss'
Dec 01 20:43:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:43:59 compute-0 ceph-mon[75880]: pgmap v428: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:00 compute-0 polkitd[44197]: Reloading rules
Dec 01 20:44:00 compute-0 polkitd[44197]: Collecting garbage unconditionally...
Dec 01 20:44:00 compute-0 polkitd[44197]: Loading rules from directory /etc/polkit-1/rules.d
Dec 01 20:44:00 compute-0 polkitd[44197]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 01 20:44:00 compute-0 polkitd[44197]: Finished loading, compiling and executing 3 rules
Dec 01 20:44:00 compute-0 polkitd[44197]: Reloading rules
Dec 01 20:44:00 compute-0 polkitd[44197]: Collecting garbage unconditionally...
Dec 01 20:44:00 compute-0 polkitd[44197]: Loading rules from directory /etc/polkit-1/rules.d
Dec 01 20:44:00 compute-0 polkitd[44197]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 01 20:44:00 compute-0 polkitd[44197]: Finished loading, compiling and executing 3 rules
Dec 01 20:44:01 compute-0 ceph-mon[75880]: pgmap v429: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:02 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 01 20:44:02 compute-0 sshd[1007]: Received signal 15; terminating.
Dec 01 20:44:02 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 01 20:44:02 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 01 20:44:02 compute-0 systemd[1]: sshd.service: Consumed 4.018s CPU time, read 564.0K from disk, written 4.0K to disk.
Dec 01 20:44:02 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 01 20:44:02 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 01 20:44:02 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 01 20:44:02 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 01 20:44:02 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 01 20:44:02 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 01 20:44:02 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 01 20:44:02 compute-0 sshd[181830]: Server listening on 0.0.0.0 port 22.
Dec 01 20:44:02 compute-0 sshd[181830]: Server listening on :: port 22.
Dec 01 20:44:02 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 01 20:44:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:44:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:44:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:44:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:44:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:44:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:44:03 compute-0 ceph-mon[75880]: pgmap v430: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:04 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 01 20:44:04 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 01 20:44:04 compute-0 systemd[1]: Reloading.
Dec 01 20:44:05 compute-0 systemd-rc-local-generator[182081]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:44:05 compute-0 systemd-sysv-generator[182089]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:44:05 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 01 20:44:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:07 compute-0 ceph-mon[75880]: pgmap v431: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:07 compute-0 sudo[162974]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:08 compute-0 ceph-mon[75880]: pgmap v432: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:08 compute-0 sudo[186191]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrwokdjfptsprbmfqyrlrgdvhufvpiny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621847.9761016-336-274419516157010/AnsiballZ_systemd.py'
Dec 01 20:44:08 compute-0 sudo[186191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:08 compute-0 python3.9[186222]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 01 20:44:09 compute-0 systemd[1]: Reloading.
Dec 01 20:44:09 compute-0 systemd-sysv-generator[186605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:44:09 compute-0 systemd-rc-local-generator[186600]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:44:09 compute-0 sudo[186191]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:09 compute-0 sudo[187464]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umexribjxnrdcotxkmeqrquealyvyyxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621849.541685-336-51794485984492/AnsiballZ_systemd.py'
Dec 01 20:44:09 compute-0 sudo[187464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:10 compute-0 ceph-mon[75880]: pgmap v433: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:10 compute-0 python3.9[187481]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 01 20:44:10 compute-0 systemd[1]: Reloading.
Dec 01 20:44:10 compute-0 systemd-rc-local-generator[187921]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:44:10 compute-0 systemd-sysv-generator[187925]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:44:10 compute-0 sudo[187464]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:10 compute-0 sudo[188716]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwrzbjhyupvyjxfjptfnastqxhuwqspd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621850.6718946-336-21077235191883/AnsiballZ_systemd.py'
Dec 01 20:44:10 compute-0 sudo[188716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:11 compute-0 python3.9[188740]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 01 20:44:11 compute-0 systemd[1]: Reloading.
Dec 01 20:44:11 compute-0 systemd-rc-local-generator[189206]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:44:11 compute-0 systemd-sysv-generator[189213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:44:11 compute-0 sudo[188716]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:12 compute-0 sudo[190092]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvtbtfqvftwtcsuenxkyxoqvauqacuax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621851.7527127-336-128055208982220/AnsiballZ_systemd.py'
Dec 01 20:44:12 compute-0 sudo[190092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:12 compute-0 ceph-mon[75880]: pgmap v434: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:12 compute-0 python3.9[190112]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 01 20:44:12 compute-0 systemd[1]: Reloading.
Dec 01 20:44:12 compute-0 systemd-rc-local-generator[190507]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:44:12 compute-0 systemd-sysv-generator[190512]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:44:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:12 compute-0 sudo[190092]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:13 compute-0 sudo[191254]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uegcqprboanmdljcgumcqphmzmwcqxct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621852.8433313-365-271173808605039/AnsiballZ_systemd.py'
Dec 01 20:44:13 compute-0 sudo[191254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 01 20:44:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 01 20:44:13 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.420s CPU time.
Dec 01 20:44:13 compute-0 systemd[1]: run-r70d8976ca18e4cbbbdf52288c0fc070b.service: Deactivated successfully.
Dec 01 20:44:13 compute-0 python3.9[191256]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:13 compute-0 systemd[1]: Reloading.
Dec 01 20:44:13 compute-0 systemd-rc-local-generator[191288]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:44:13 compute-0 systemd-sysv-generator[191292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:44:13 compute-0 sudo[191254]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:14 compute-0 ceph-mon[75880]: pgmap v435: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:14 compute-0 sudo[191457]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gztuzeusppdpqsehizyilumchloyqwmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621854.0897205-365-27038340688185/AnsiballZ_systemd.py'
Dec 01 20:44:14 compute-0 sudo[191457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:14 compute-0 podman[191419]: 2025-12-01 20:44:14.476133416 +0000 UTC m=+0.070916120 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 01 20:44:14 compute-0 podman[191420]: 2025-12-01 20:44:14.532968951 +0000 UTC m=+0.127820487 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 01 20:44:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:14 compute-0 python3.9[191478]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:14 compute-0 systemd[1]: Reloading.
Dec 01 20:44:14 compute-0 systemd-sysv-generator[191519]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:44:14 compute-0 systemd-rc-local-generator[191515]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:44:15 compute-0 sudo[191457]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:15 compute-0 sudo[191677]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhsvzvryclifigpijcfidkgwmyzdywjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621855.2652802-365-52089797788127/AnsiballZ_systemd.py'
Dec 01 20:44:15 compute-0 sudo[191677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:15 compute-0 python3.9[191679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:15 compute-0 systemd[1]: Reloading.
Dec 01 20:44:15 compute-0 systemd-rc-local-generator[191710]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:44:15 compute-0 systemd-sysv-generator[191714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:44:16 compute-0 ceph-mon[75880]: pgmap v436: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:16 compute-0 sudo[191677]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:16 compute-0 sudo[191868]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fubtcqfjphomtrqwhsltzyspsnvszjmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621856.4140427-365-201347276776539/AnsiballZ_systemd.py'
Dec 01 20:44:16 compute-0 sudo[191868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:17 compute-0 python3.9[191870]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:17 compute-0 sudo[191868]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:17 compute-0 sudo[192023]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltiukpmvwtroziclnodxzoytbdutrqrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621857.3629136-365-97282230572724/AnsiballZ_systemd.py'
Dec 01 20:44:17 compute-0 sudo[192023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:17 compute-0 python3.9[192025]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:18 compute-0 systemd[1]: Reloading.
Dec 01 20:44:18 compute-0 ceph-mon[75880]: pgmap v437: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:18 compute-0 systemd-sysv-generator[192056]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:44:18 compute-0 systemd-rc-local-generator[192053]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:44:18 compute-0 sudo[192023]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:18 compute-0 sudo[192213]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmpgcwislpgnlaalmclurjganxifswgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621858.6581233-401-166318644140162/AnsiballZ_systemd.py'
Dec 01 20:44:18 compute-0 sudo[192213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:19 compute-0 python3.9[192215]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 01 20:44:19 compute-0 systemd[1]: Reloading.
Dec 01 20:44:19 compute-0 systemd-sysv-generator[192250]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:44:19 compute-0 systemd-rc-local-generator[192245]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:44:19 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 01 20:44:19 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 01 20:44:19 compute-0 sudo[192213]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:20 compute-0 sudo[192336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:44:20 compute-0 sudo[192336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:44:20 compute-0 sudo[192336]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:20 compute-0 sudo[192381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:44:20 compute-0 sudo[192381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:44:20 compute-0 sudo[192456]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzuhxedvcpchuqravwwtwpevpvojldox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621859.8600678-409-253899238608281/AnsiballZ_systemd.py'
Dec 01 20:44:20 compute-0 sudo[192456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:20 compute-0 ceph-mon[75880]: pgmap v438: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:20 compute-0 python3.9[192458]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:20 compute-0 sudo[192456]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:20 compute-0 sudo[192381]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:44:20 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:44:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:44:20 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:44:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:44:20 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:44:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:44:20 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:44:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:44:20 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:44:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:44:20 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:44:20 compute-0 sudo[192570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:44:20 compute-0 sudo[192570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:44:20 compute-0 sudo[192570]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:20 compute-0 sudo[192618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:44:20 compute-0 sudo[192618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:44:20 compute-0 sudo[192693]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skuwdmvjzizavfuwxzhtwehpcevqhkqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621860.63278-409-215497122856916/AnsiballZ_systemd.py'
Dec 01 20:44:20 compute-0 sudo[192693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:21 compute-0 podman[192707]: 2025-12-01 20:44:21.08160037 +0000 UTC m=+0.045224709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:44:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:44:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:44:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:44:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:44:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:44:21 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:44:21 compute-0 podman[192707]: 2025-12-01 20:44:21.174885594 +0000 UTC m=+0.138509843 container create 5c661df3d57f21f29ed4940875a4889b4458ca2d2e06452af88d1ffaebb35352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 01 20:44:21 compute-0 python3.9[192695]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:21 compute-0 systemd[1]: Started libpod-conmon-5c661df3d57f21f29ed4940875a4889b4458ca2d2e06452af88d1ffaebb35352.scope.
Dec 01 20:44:21 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:44:21 compute-0 podman[192707]: 2025-12-01 20:44:21.287318342 +0000 UTC m=+0.250942641 container init 5c661df3d57f21f29ed4940875a4889b4458ca2d2e06452af88d1ffaebb35352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:44:21 compute-0 podman[192707]: 2025-12-01 20:44:21.294577081 +0000 UTC m=+0.258201330 container start 5c661df3d57f21f29ed4940875a4889b4458ca2d2e06452af88d1ffaebb35352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:44:21 compute-0 podman[192707]: 2025-12-01 20:44:21.297596097 +0000 UTC m=+0.261220396 container attach 5c661df3d57f21f29ed4940875a4889b4458ca2d2e06452af88d1ffaebb35352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:44:21 compute-0 intelligent_lichterman[192726]: 167 167
Dec 01 20:44:21 compute-0 systemd[1]: libpod-5c661df3d57f21f29ed4940875a4889b4458ca2d2e06452af88d1ffaebb35352.scope: Deactivated successfully.
Dec 01 20:44:21 compute-0 podman[192707]: 2025-12-01 20:44:21.300491198 +0000 UTC m=+0.264115437 container died 5c661df3d57f21f29ed4940875a4889b4458ca2d2e06452af88d1ffaebb35352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:44:21 compute-0 sudo[192693]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3c4a06bcba4f1fdaa23e61e5be06c18dc0ffe401646b68dba96ac58f3cb64ac-merged.mount: Deactivated successfully.
Dec 01 20:44:21 compute-0 podman[192707]: 2025-12-01 20:44:21.339424307 +0000 UTC m=+0.303048556 container remove 5c661df3d57f21f29ed4940875a4889b4458ca2d2e06452af88d1ffaebb35352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:44:21 compute-0 systemd[1]: libpod-conmon-5c661df3d57f21f29ed4940875a4889b4458ca2d2e06452af88d1ffaebb35352.scope: Deactivated successfully.
Dec 01 20:44:21 compute-0 podman[192797]: 2025-12-01 20:44:21.513688017 +0000 UTC m=+0.046331114 container create 103da577891e4d8700116ab75abd9d79ed148e9b85bc9c455b2a9eadf4e3514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 01 20:44:21 compute-0 systemd[1]: Started libpod-conmon-103da577891e4d8700116ab75abd9d79ed148e9b85bc9c455b2a9eadf4e3514e.scope.
Dec 01 20:44:21 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a60b2f1d5e1ed114db76140dcfe6838c3ded7aef6da73881dcd354ae5750ca0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a60b2f1d5e1ed114db76140dcfe6838c3ded7aef6da73881dcd354ae5750ca0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a60b2f1d5e1ed114db76140dcfe6838c3ded7aef6da73881dcd354ae5750ca0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a60b2f1d5e1ed114db76140dcfe6838c3ded7aef6da73881dcd354ae5750ca0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a60b2f1d5e1ed114db76140dcfe6838c3ded7aef6da73881dcd354ae5750ca0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:21 compute-0 podman[192797]: 2025-12-01 20:44:21.495127561 +0000 UTC m=+0.027770748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:44:21 compute-0 podman[192797]: 2025-12-01 20:44:21.589666644 +0000 UTC m=+0.122309761 container init 103da577891e4d8700116ab75abd9d79ed148e9b85bc9c455b2a9eadf4e3514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:44:21 compute-0 podman[192797]: 2025-12-01 20:44:21.601948102 +0000 UTC m=+0.134591199 container start 103da577891e4d8700116ab75abd9d79ed148e9b85bc9c455b2a9eadf4e3514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 01 20:44:21 compute-0 podman[192797]: 2025-12-01 20:44:21.605437693 +0000 UTC m=+0.138080870 container attach 103da577891e4d8700116ab75abd9d79ed148e9b85bc9c455b2a9eadf4e3514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 01 20:44:21 compute-0 sudo[192921]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxdedkqhwvusajnsnwwgvvqaqgwzsnhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621861.4549267-409-214384714178828/AnsiballZ_systemd.py'
Dec 01 20:44:21 compute-0 sudo[192921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:22 compute-0 pensive_snyder[192843]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:44:22 compute-0 pensive_snyder[192843]: --> All data devices are unavailable
Dec 01 20:44:22 compute-0 python3.9[192925]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:22 compute-0 podman[192797]: 2025-12-01 20:44:22.081718144 +0000 UTC m=+0.614361281 container died 103da577891e4d8700116ab75abd9d79ed148e9b85bc9c455b2a9eadf4e3514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:44:22 compute-0 systemd[1]: libpod-103da577891e4d8700116ab75abd9d79ed148e9b85bc9c455b2a9eadf4e3514e.scope: Deactivated successfully.
Dec 01 20:44:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a60b2f1d5e1ed114db76140dcfe6838c3ded7aef6da73881dcd354ae5750ca0c-merged.mount: Deactivated successfully.
Dec 01 20:44:22 compute-0 podman[192797]: 2025-12-01 20:44:22.134227071 +0000 UTC m=+0.666870188 container remove 103da577891e4d8700116ab75abd9d79ed148e9b85bc9c455b2a9eadf4e3514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 01 20:44:22 compute-0 systemd[1]: libpod-conmon-103da577891e4d8700116ab75abd9d79ed148e9b85bc9c455b2a9eadf4e3514e.scope: Deactivated successfully.
Dec 01 20:44:22 compute-0 sudo[192921]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:22 compute-0 sudo[192618]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:22 compute-0 ceph-mon[75880]: pgmap v439: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:22 compute-0 sudo[192954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:44:22 compute-0 sudo[192954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:44:22 compute-0 sudo[192954]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:22 compute-0 sudo[193002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:44:22 compute-0 sudo[193002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:44:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:22 compute-0 podman[193138]: 2025-12-01 20:44:22.557319595 +0000 UTC m=+0.054323416 container create f60f9c588b4a8cf77a640bb1834f8f31574609302cac31c86abaad968ddf8ed5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:44:22 compute-0 sudo[193179]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfbrlqnfpumczulewlfnbkvmhlhcxnmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621862.287961-409-240477417091077/AnsiballZ_systemd.py'
Dec 01 20:44:22 compute-0 sudo[193179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:22 compute-0 systemd[1]: Started libpod-conmon-f60f9c588b4a8cf77a640bb1834f8f31574609302cac31c86abaad968ddf8ed5.scope.
Dec 01 20:44:22 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:44:22 compute-0 podman[193138]: 2025-12-01 20:44:22.532532572 +0000 UTC m=+0.029536453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:44:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v440: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:22 compute-0 podman[193138]: 2025-12-01 20:44:22.649866455 +0000 UTC m=+0.146870336 container init f60f9c588b4a8cf77a640bb1834f8f31574609302cac31c86abaad968ddf8ed5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cartwright, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:44:22 compute-0 podman[193138]: 2025-12-01 20:44:22.66271261 +0000 UTC m=+0.159716411 container start f60f9c588b4a8cf77a640bb1834f8f31574609302cac31c86abaad968ddf8ed5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 01 20:44:22 compute-0 podman[193138]: 2025-12-01 20:44:22.666992816 +0000 UTC m=+0.163996717 container attach f60f9c588b4a8cf77a640bb1834f8f31574609302cac31c86abaad968ddf8ed5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cartwright, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:44:22 compute-0 beautiful_cartwright[193185]: 167 167
Dec 01 20:44:22 compute-0 systemd[1]: libpod-f60f9c588b4a8cf77a640bb1834f8f31574609302cac31c86abaad968ddf8ed5.scope: Deactivated successfully.
Dec 01 20:44:22 compute-0 podman[193138]: 2025-12-01 20:44:22.674374829 +0000 UTC m=+0.171378650 container died f60f9c588b4a8cf77a640bb1834f8f31574609302cac31c86abaad968ddf8ed5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:44:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6a7ba8ee9ce389224a3d84614868f1720a34f05271ff5bf11cfcf9cd28b6638-merged.mount: Deactivated successfully.
Dec 01 20:44:22 compute-0 podman[193138]: 2025-12-01 20:44:22.724990216 +0000 UTC m=+0.221994017 container remove f60f9c588b4a8cf77a640bb1834f8f31574609302cac31c86abaad968ddf8ed5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cartwright, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:44:22 compute-0 systemd[1]: libpod-conmon-f60f9c588b4a8cf77a640bb1834f8f31574609302cac31c86abaad968ddf8ed5.scope: Deactivated successfully.
Dec 01 20:44:22 compute-0 python3.9[193182]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:22 compute-0 podman[193208]: 2025-12-01 20:44:22.893759663 +0000 UTC m=+0.050261968 container create f599745b35bf0f43e5ddea5f479d64ebd59c79848b7d965c26b0a14b31c9afe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:44:22 compute-0 systemd[1]: Started libpod-conmon-f599745b35bf0f43e5ddea5f479d64ebd59c79848b7d965c26b0a14b31c9afe5.scope.
Dec 01 20:44:22 compute-0 podman[193208]: 2025-12-01 20:44:22.874732892 +0000 UTC m=+0.031235217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:44:22 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29153bc4bd652d477e37c0948870469ffb6ae59e4a4ebdea46c6952f5f4a7edf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29153bc4bd652d477e37c0948870469ffb6ae59e4a4ebdea46c6952f5f4a7edf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29153bc4bd652d477e37c0948870469ffb6ae59e4a4ebdea46c6952f5f4a7edf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29153bc4bd652d477e37c0948870469ffb6ae59e4a4ebdea46c6952f5f4a7edf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:22 compute-0 podman[193208]: 2025-12-01 20:44:22.989239585 +0000 UTC m=+0.145741900 container init f599745b35bf0f43e5ddea5f479d64ebd59c79848b7d965c26b0a14b31c9afe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 01 20:44:22 compute-0 sudo[193179]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:23 compute-0 podman[193208]: 2025-12-01 20:44:23.000363457 +0000 UTC m=+0.156865842 container start f599745b35bf0f43e5ddea5f479d64ebd59c79848b7d965c26b0a14b31c9afe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:44:23 compute-0 podman[193208]: 2025-12-01 20:44:23.006009815 +0000 UTC m=+0.162512140 container attach f599745b35bf0f43e5ddea5f479d64ebd59c79848b7d965c26b0a14b31c9afe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:44:23 compute-0 fervent_feynman[193228]: {
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:     "0": [
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:         {
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "devices": [
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "/dev/loop3"
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             ],
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_name": "ceph_lv0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_size": "21470642176",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "name": "ceph_lv0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "tags": {
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.cluster_name": "ceph",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.crush_device_class": "",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.encrypted": "0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.objectstore": "bluestore",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.osd_id": "0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.type": "block",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.vdo": "0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.with_tpm": "0"
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             },
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "type": "block",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "vg_name": "ceph_vg0"
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:         }
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:     ],
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:     "1": [
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:         {
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "devices": [
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "/dev/loop4"
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             ],
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_name": "ceph_lv1",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_size": "21470642176",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "name": "ceph_lv1",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "tags": {
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.cluster_name": "ceph",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.crush_device_class": "",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.encrypted": "0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.objectstore": "bluestore",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.osd_id": "1",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.type": "block",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.vdo": "0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.with_tpm": "0"
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             },
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "type": "block",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "vg_name": "ceph_vg1"
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:         }
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:     ],
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:     "2": [
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:         {
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "devices": [
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "/dev/loop5"
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             ],
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_name": "ceph_lv2",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_size": "21470642176",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "name": "ceph_lv2",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "tags": {
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.cluster_name": "ceph",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.crush_device_class": "",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.encrypted": "0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.objectstore": "bluestore",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.osd_id": "2",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.type": "block",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.vdo": "0",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:                 "ceph.with_tpm": "0"
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             },
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "type": "block",
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:             "vg_name": "ceph_vg2"
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:         }
Dec 01 20:44:23 compute-0 fervent_feynman[193228]:     ]
Dec 01 20:44:23 compute-0 fervent_feynman[193228]: }
Dec 01 20:44:23 compute-0 systemd[1]: libpod-f599745b35bf0f43e5ddea5f479d64ebd59c79848b7d965c26b0a14b31c9afe5.scope: Deactivated successfully.
Dec 01 20:44:23 compute-0 podman[193208]: 2025-12-01 20:44:23.349134754 +0000 UTC m=+0.505637059 container died f599745b35bf0f43e5ddea5f479d64ebd59c79848b7d965c26b0a14b31c9afe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-29153bc4bd652d477e37c0948870469ffb6ae59e4a4ebdea46c6952f5f4a7edf-merged.mount: Deactivated successfully.
Dec 01 20:44:23 compute-0 podman[193208]: 2025-12-01 20:44:23.401160586 +0000 UTC m=+0.557662891 container remove f599745b35bf0f43e5ddea5f479d64ebd59c79848b7d965c26b0a14b31c9afe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 01 20:44:23 compute-0 systemd[1]: libpod-conmon-f599745b35bf0f43e5ddea5f479d64ebd59c79848b7d965c26b0a14b31c9afe5.scope: Deactivated successfully.
Dec 01 20:44:23 compute-0 sudo[193002]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:23 compute-0 sudo[193398]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzqhkzunjojfnhdfmawxdsycymztfsvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621863.1516793-409-4641150565792/AnsiballZ_systemd.py'
Dec 01 20:44:23 compute-0 sudo[193398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:23 compute-0 sudo[193401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:44:23 compute-0 sudo[193401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:44:23 compute-0 sudo[193401]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:23 compute-0 sudo[193426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:44:23 compute-0 sudo[193426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:44:23 compute-0 python3.9[193400]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:23 compute-0 podman[193466]: 2025-12-01 20:44:23.833005975 +0000 UTC m=+0.038531037 container create eb86ab5e4f1683a1998e616cea93a5c253dbd7d472737c37b8f8394ce03dfd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_faraday, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:44:23 compute-0 sudo[193398]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:23 compute-0 systemd[1]: Started libpod-conmon-eb86ab5e4f1683a1998e616cea93a5c253dbd7d472737c37b8f8394ce03dfd46.scope.
Dec 01 20:44:23 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:44:23 compute-0 podman[193466]: 2025-12-01 20:44:23.905550415 +0000 UTC m=+0.111075507 container init eb86ab5e4f1683a1998e616cea93a5c253dbd7d472737c37b8f8394ce03dfd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_faraday, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:44:23 compute-0 podman[193466]: 2025-12-01 20:44:23.814956765 +0000 UTC m=+0.020481847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:44:23 compute-0 podman[193466]: 2025-12-01 20:44:23.917013876 +0000 UTC m=+0.122538938 container start eb86ab5e4f1683a1998e616cea93a5c253dbd7d472737c37b8f8394ce03dfd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:44:23 compute-0 podman[193466]: 2025-12-01 20:44:23.920479596 +0000 UTC m=+0.126004678 container attach eb86ab5e4f1683a1998e616cea93a5c253dbd7d472737c37b8f8394ce03dfd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 01 20:44:23 compute-0 compassionate_faraday[193486]: 167 167
Dec 01 20:44:23 compute-0 systemd[1]: libpod-eb86ab5e4f1683a1998e616cea93a5c253dbd7d472737c37b8f8394ce03dfd46.scope: Deactivated successfully.
Dec 01 20:44:23 compute-0 podman[193466]: 2025-12-01 20:44:23.925243266 +0000 UTC m=+0.130768328 container died eb86ab5e4f1683a1998e616cea93a5c253dbd7d472737c37b8f8394ce03dfd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a97c3f4f8094d6610f3ddcb6aab82f3a35265015b3a039cb6d1c85ed57a1185-merged.mount: Deactivated successfully.
Dec 01 20:44:23 compute-0 podman[193466]: 2025-12-01 20:44:23.96147728 +0000 UTC m=+0.167002342 container remove eb86ab5e4f1683a1998e616cea93a5c253dbd7d472737c37b8f8394ce03dfd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:44:23 compute-0 systemd[1]: libpod-conmon-eb86ab5e4f1683a1998e616cea93a5c253dbd7d472737c37b8f8394ce03dfd46.scope: Deactivated successfully.
Dec 01 20:44:24 compute-0 podman[193604]: 2025-12-01 20:44:24.128181461 +0000 UTC m=+0.046844400 container create 0a848308bd3619857ccf5092ab6fb4be43fd082792db8c0add6506f1217a7723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:44:24 compute-0 systemd[1]: Started libpod-conmon-0a848308bd3619857ccf5092ab6fb4be43fd082792db8c0add6506f1217a7723.scope.
Dec 01 20:44:24 compute-0 ceph-mon[75880]: pgmap v440: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:24 compute-0 podman[193604]: 2025-12-01 20:44:24.105390012 +0000 UTC m=+0.024053041 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:44:24 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72f7e71bc88996a553c0fcff5b715f5398c5b42cd32072a0bebe563bc5fddb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72f7e71bc88996a553c0fcff5b715f5398c5b42cd32072a0bebe563bc5fddb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72f7e71bc88996a553c0fcff5b715f5398c5b42cd32072a0bebe563bc5fddb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72f7e71bc88996a553c0fcff5b715f5398c5b42cd32072a0bebe563bc5fddb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:44:24 compute-0 sudo[193674]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zewanjuegygulrmcgvdfinaztyvpsiie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621863.9710333-409-99475297118324/AnsiballZ_systemd.py'
Dec 01 20:44:24 compute-0 sudo[193674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:24 compute-0 podman[193604]: 2025-12-01 20:44:24.230268753 +0000 UTC m=+0.148931702 container init 0a848308bd3619857ccf5092ab6fb4be43fd082792db8c0add6506f1217a7723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_saha, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:44:24 compute-0 podman[193604]: 2025-12-01 20:44:24.239272167 +0000 UTC m=+0.157935146 container start 0a848308bd3619857ccf5092ab6fb4be43fd082792db8c0add6506f1217a7723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 01 20:44:24 compute-0 podman[193604]: 2025-12-01 20:44:24.243114479 +0000 UTC m=+0.161777438 container attach 0a848308bd3619857ccf5092ab6fb4be43fd082792db8c0add6506f1217a7723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:44:24 compute-0 python3.9[193676]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:24 compute-0 sudo[193674]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:24 compute-0 lvm[193855]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:44:24 compute-0 lvm[193855]: VG ceph_vg1 finished
Dec 01 20:44:24 compute-0 lvm[193854]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:44:24 compute-0 lvm[193854]: VG ceph_vg0 finished
Dec 01 20:44:24 compute-0 lvm[193869]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:44:24 compute-0 lvm[193869]: VG ceph_vg2 finished
Dec 01 20:44:24 compute-0 sudo[193910]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdjetcdqsgquznqwizmezladjzbyvjuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621864.716774-409-213056931448692/AnsiballZ_systemd.py'
Dec 01 20:44:24 compute-0 sudo[193910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:24 compute-0 unruffled_saha[193654]: {}
Dec 01 20:44:25 compute-0 systemd[1]: libpod-0a848308bd3619857ccf5092ab6fb4be43fd082792db8c0add6506f1217a7723.scope: Deactivated successfully.
Dec 01 20:44:25 compute-0 podman[193604]: 2025-12-01 20:44:25.038733548 +0000 UTC m=+0.957396497 container died 0a848308bd3619857ccf5092ab6fb4be43fd082792db8c0add6506f1217a7723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_saha, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 01 20:44:25 compute-0 systemd[1]: libpod-0a848308bd3619857ccf5092ab6fb4be43fd082792db8c0add6506f1217a7723.scope: Consumed 1.279s CPU time.
Dec 01 20:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a72f7e71bc88996a553c0fcff5b715f5398c5b42cd32072a0bebe563bc5fddb-merged.mount: Deactivated successfully.
Dec 01 20:44:25 compute-0 podman[193604]: 2025-12-01 20:44:25.086222067 +0000 UTC m=+1.004885006 container remove 0a848308bd3619857ccf5092ab6fb4be43fd082792db8c0add6506f1217a7723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 01 20:44:25 compute-0 systemd[1]: libpod-conmon-0a848308bd3619857ccf5092ab6fb4be43fd082792db8c0add6506f1217a7723.scope: Deactivated successfully.
Dec 01 20:44:25 compute-0 sudo[193426]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:44:25 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:44:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:44:25 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:44:25 compute-0 sudo[193926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:44:25 compute-0 sudo[193926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:44:25 compute-0 sudo[193926]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:25 compute-0 python3.9[193912]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:25 compute-0 sudo[193910]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:25 compute-0 sudo[194103]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyluuktywnhvjlzdsexgrmydloqxcleu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621865.4961426-409-215598018795624/AnsiballZ_systemd.py'
Dec 01 20:44:25 compute-0 sudo[194103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:26 compute-0 python3.9[194105]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:26 compute-0 sudo[194103]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:26 compute-0 ceph-mon[75880]: pgmap v441: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:26 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:44:26 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:44:26 compute-0 sudo[194258]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpjjatlyavyyvagehuuwunxaqiqcczqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621866.2577946-409-266714913853146/AnsiballZ_systemd.py'
Dec 01 20:44:26 compute-0 sudo[194258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:26 compute-0 python3.9[194260]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:26 compute-0 sudo[194258]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:27 compute-0 sudo[194413]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhphpyyguobgujvukxzvwvyoifwwymgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621867.023532-409-41856773909428/AnsiballZ_systemd.py'
Dec 01 20:44:27 compute-0 sudo[194413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:27 compute-0 python3.9[194415]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:27 compute-0 sudo[194413]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:28 compute-0 sudo[194568]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uddyqvpijtzcwjgycidsymtaefmawndg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621867.7839258-409-217621194115473/AnsiballZ_systemd.py'
Dec 01 20:44:28 compute-0 sudo[194568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:28 compute-0 ceph-mon[75880]: pgmap v442: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:28 compute-0 python3.9[194570]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:28 compute-0 sudo[194568]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v443: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:28 compute-0 sudo[194723]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygxpvyhonnisyajcqxsvexkliygmyyma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621868.6778882-409-209876382462101/AnsiballZ_systemd.py'
Dec 01 20:44:28 compute-0 sudo[194723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:29 compute-0 python3.9[194725]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:29 compute-0 sudo[194723]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:29 compute-0 sudo[194878]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzjtjdlxwcmqicdrwjefmypavrpjcgea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621869.5904996-409-214394105138122/AnsiballZ_systemd.py'
Dec 01 20:44:29 compute-0 sudo[194878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:30 compute-0 python3.9[194880]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:30 compute-0 ceph-mon[75880]: pgmap v443: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:30 compute-0 sudo[194878]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:30 compute-0 sudo[195033]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvvhzeknfmmznocprjomluymnvujoeub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621870.4144015-409-117370863568383/AnsiballZ_systemd.py'
Dec 01 20:44:30 compute-0 sudo[195033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:30 compute-0 python3.9[195035]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 01 20:44:31 compute-0 sudo[195033]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:31 compute-0 sudo[195188]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqhibpnpgabbwawpxsethpfnfijcwaoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621871.382638-511-131107915823302/AnsiballZ_file.py'
Dec 01 20:44:31 compute-0 sudo[195188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:31 compute-0 python3.9[195190]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:44:31 compute-0 sudo[195188]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:32 compute-0 ceph-mon[75880]: pgmap v444: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:32 compute-0 sudo[195340]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmxgaxsxvqwxaevvtakivnemaberohsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621872.0740297-511-160834067250433/AnsiballZ_file.py'
Dec 01 20:44:32 compute-0 sudo[195340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:44:32
Dec 01 20:44:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:44:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:44:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms']
Dec 01 20:44:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:44:32 compute-0 python3.9[195342]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:44:32 compute-0 sudo[195340]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:32 compute-0 sudo[195492]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktofqfzaunmaucfuqxlisxzcezwnxvav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621872.6990423-511-98388309446481/AnsiballZ_file.py'
Dec 01 20:44:32 compute-0 sudo[195492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:33 compute-0 python3.9[195494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:44:33 compute-0 sudo[195492]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:44:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:44:33 compute-0 sudo[195644]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyinwenczsgktyblnkrmcuzqgrvlsyvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621873.3058188-511-263777853306120/AnsiballZ_file.py'
Dec 01 20:44:33 compute-0 sudo[195644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:33 compute-0 python3.9[195646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:44:33 compute-0 sudo[195644]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:34 compute-0 ceph-mon[75880]: pgmap v445: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:34 compute-0 sudo[195796]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsybfdcablvtlfynuuesojgdzvwasxiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621873.9583614-511-189796344109792/AnsiballZ_file.py'
Dec 01 20:44:34 compute-0 sudo[195796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:34 compute-0 python3.9[195798]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:44:34 compute-0 sudo[195796]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v446: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:34 compute-0 sudo[195948]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iadoxyzjgypmnfufpajebiucfzhzoidd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621874.7057052-511-90592533300569/AnsiballZ_file.py'
Dec 01 20:44:34 compute-0 sudo[195948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:35 compute-0 python3.9[195950]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:44:35 compute-0 sudo[195948]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:35 compute-0 sudo[196100]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtoznfsruambjzmdoalmwanynnceahxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621875.3338187-554-159514256288136/AnsiballZ_stat.py'
Dec 01 20:44:35 compute-0 sudo[196100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:35 compute-0 python3.9[196102]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:36 compute-0 sudo[196100]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:36 compute-0 ceph-mon[75880]: pgmap v446: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:36 compute-0 sudo[196225]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjdgtashviscctbobwqfetjjkcojssql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621875.3338187-554-159514256288136/AnsiballZ_copy.py'
Dec 01 20:44:36 compute-0 sudo[196225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:36 compute-0 python3.9[196227]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764621875.3338187-554-159514256288136/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:36 compute-0 sudo[196225]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:37 compute-0 sudo[196377]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqmsosnazhvwrxbirpdnmemuactdbnzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621876.925744-554-107010059350100/AnsiballZ_stat.py'
Dec 01 20:44:37 compute-0 sudo[196377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:37 compute-0 python3.9[196379]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:37 compute-0 sudo[196377]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:37 compute-0 sudo[196502]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfpvselwfqjrkmlyjbscuemrdmpczdvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621876.925744-554-107010059350100/AnsiballZ_copy.py'
Dec 01 20:44:37 compute-0 sudo[196502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:37 compute-0 python3.9[196504]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764621876.925744-554-107010059350100/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:37 compute-0 sudo[196502]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:38 compute-0 ceph-mon[75880]: pgmap v447: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:38 compute-0 sudo[196654]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swuofketbsjabjjjdtdbdsytgsbrzczp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621878.1655526-554-23114931156482/AnsiballZ_stat.py'
Dec 01 20:44:38 compute-0 sudo[196654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:38 compute-0 python3.9[196656]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:38 compute-0 sudo[196654]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:39 compute-0 sudo[196779]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jibqidvjvxphovzkftrggcspdndbvxhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621878.1655526-554-23114931156482/AnsiballZ_copy.py'
Dec 01 20:44:39 compute-0 sudo[196779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:39 compute-0 python3.9[196781]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764621878.1655526-554-23114931156482/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:39 compute-0 sudo[196779]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:39 compute-0 ceph-mon[75880]: pgmap v448: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:39 compute-0 sudo[196931]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgnibuciyoisjkxnjyhashdxlbuxyjbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621879.419921-554-21763584504282/AnsiballZ_stat.py'
Dec 01 20:44:39 compute-0 sudo[196931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:39 compute-0 python3.9[196933]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:39 compute-0 sudo[196931]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:40 compute-0 sudo[197056]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbbgvworxsmdmticwcccyhxanhdshmxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621879.419921-554-21763584504282/AnsiballZ_copy.py'
Dec 01 20:44:40 compute-0 sudo[197056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:44:40 compute-0 python3.9[197058]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764621879.419921-554-21763584504282/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:40 compute-0 sudo[197056]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v449: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:40 compute-0 sudo[197208]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycgtuywfamwshyvfqykncqodehkfopek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621880.6527002-554-279340898698153/AnsiballZ_stat.py'
Dec 01 20:44:40 compute-0 sudo[197208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:41 compute-0 python3.9[197210]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:41 compute-0 sudo[197208]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:41 compute-0 sudo[197333]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaidxxxdlhljqijmoxlkhlrxxlshozpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621880.6527002-554-279340898698153/AnsiballZ_copy.py'
Dec 01 20:44:41 compute-0 sudo[197333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:41 compute-0 python3.9[197335]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764621880.6527002-554-279340898698153/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:41 compute-0 sudo[197333]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:41 compute-0 ceph-mon[75880]: pgmap v449: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:42 compute-0 sudo[197485]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgocvhoghkxliarlnkbugiekzabdyvsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621881.8607826-554-119266738739534/AnsiballZ_stat.py'
Dec 01 20:44:42 compute-0 sudo[197485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:42 compute-0 python3.9[197487]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:42 compute-0 sudo[197485]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:42 compute-0 sudo[197610]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gljjyxjgomscofhvfkazohkjmbjehkxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621881.8607826-554-119266738739534/AnsiballZ_copy.py'
Dec 01 20:44:42 compute-0 sudo[197610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:42 compute-0 python3.9[197612]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764621881.8607826-554-119266738739534/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:42 compute-0 sudo[197610]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:43 compute-0 sudo[197762]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umibmiyaamjdhmpbrwgpwjdgnyehoctz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621883.0962913-554-30638312192083/AnsiballZ_stat.py'
Dec 01 20:44:43 compute-0 sudo[197762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:43 compute-0 python3.9[197764]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:43 compute-0 sudo[197762]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:43 compute-0 ceph-mon[75880]: pgmap v450: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:43 compute-0 sudo[197885]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqitnmwxraiodwwjpvbgerongoqmsbmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621883.0962913-554-30638312192083/AnsiballZ_copy.py'
Dec 01 20:44:43 compute-0 sudo[197885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:44 compute-0 python3.9[197887]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764621883.0962913-554-30638312192083/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:44 compute-0 sudo[197885]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:44:44.343 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:44:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:44:44.344 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:44:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:44:44.344 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:44:44 compute-0 sudo[198037]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezuydgxnzqbdajxbxipdocbcjinspufn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621884.2705572-554-134631603966161/AnsiballZ_stat.py'
Dec 01 20:44:44 compute-0 sudo[198037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:44 compute-0 podman[198039]: 2025-12-01 20:44:44.624101476 +0000 UTC m=+0.062056350 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 01 20:44:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:44 compute-0 podman[198040]: 2025-12-01 20:44:44.662106245 +0000 UTC m=+0.101063201 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 01 20:44:44 compute-0 python3.9[198041]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:44 compute-0 sudo[198037]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:45 compute-0 sudo[198205]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvcsxdcluzuevhpyfyryajjzqkjbqkxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621884.2705572-554-134631603966161/AnsiballZ_copy.py'
Dec 01 20:44:45 compute-0 sudo[198205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:45 compute-0 python3.9[198207]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764621884.2705572-554-134631603966161/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:45 compute-0 sudo[198205]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:45 compute-0 ceph-mon[75880]: pgmap v451: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:45 compute-0 sudo[198357]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-begyzqrseehpanmdgvwaexcfndjrbfet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621885.5032637-667-124704026697598/AnsiballZ_command.py'
Dec 01 20:44:45 compute-0 sudo[198357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:45 compute-0 python3.9[198359]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 01 20:44:45 compute-0 sudo[198357]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:46 compute-0 sudo[198510]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqivswlyonqzojrmjtzfbdwrmyfgbfnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621886.1436663-676-129739723126592/AnsiballZ_file.py'
Dec 01 20:44:46 compute-0 sudo[198510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v452: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:46 compute-0 python3.9[198512]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:46 compute-0 sudo[198510]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:47 compute-0 sudo[198662]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywvtawvuhvaatmayuexlthbyymelaazu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621886.875401-676-103115066273372/AnsiballZ_file.py'
Dec 01 20:44:47 compute-0 sudo[198662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:47 compute-0 python3.9[198664]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:47 compute-0 sudo[198662]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:47 compute-0 ceph-mon[75880]: pgmap v452: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:47 compute-0 sudo[198814]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgognbswxdlgkecvnmehcxbcrpoyxezy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621887.556552-676-33695025496863/AnsiballZ_file.py'
Dec 01 20:44:47 compute-0 sudo[198814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:48 compute-0 python3.9[198816]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:48 compute-0 sudo[198814]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:48 compute-0 sudo[198966]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjktkksdiqxhcbqzabdftrcvyyjvzqgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621888.3138454-676-273449147067996/AnsiballZ_file.py'
Dec 01 20:44:48 compute-0 sudo[198966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v453: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:48 compute-0 python3.9[198968]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:48 compute-0 sudo[198966]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:49 compute-0 sudo[199118]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbpaifvstvoehgmgpbzmfhtpooxbtrlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621889.0134926-676-3139912667847/AnsiballZ_file.py'
Dec 01 20:44:49 compute-0 sudo[199118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:49 compute-0 python3.9[199120]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:49 compute-0 sudo[199118]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:49 compute-0 ceph-mon[75880]: pgmap v453: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:49 compute-0 sudo[199270]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmsriftffxborwzobwfnstislnvrdsuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621889.70477-676-76678752168657/AnsiballZ_file.py'
Dec 01 20:44:49 compute-0 sudo[199270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:50 compute-0 python3.9[199272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:50 compute-0 sudo[199270]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:50 compute-0 sudo[199422]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwxnkkdzekpqbmiyfyaxlxlfrmpnxnof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621890.2961833-676-26678123583559/AnsiballZ_file.py'
Dec 01 20:44:50 compute-0 sudo[199422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:50 compute-0 python3.9[199424]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:50 compute-0 sudo[199422]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:51 compute-0 sudo[199574]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jveyldeeolsxzequojaotgkmeovwhywf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621890.8570125-676-56837253279781/AnsiballZ_file.py'
Dec 01 20:44:51 compute-0 sudo[199574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:51 compute-0 python3.9[199576]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:51 compute-0 sudo[199574]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:51 compute-0 ceph-mon[75880]: pgmap v454: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:51 compute-0 sudo[199726]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glojvwilrsjxdrnzxdacpjngvcmbkici ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621891.5925972-676-205381674613152/AnsiballZ_file.py'
Dec 01 20:44:51 compute-0 sudo[199726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:52 compute-0 python3.9[199728]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:52 compute-0 sudo[199726]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:52 compute-0 sudo[199878]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iycxcyxnwsksmstyuqzanrhettsbyaiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621892.3250356-676-90084391131685/AnsiballZ_file.py'
Dec 01 20:44:52 compute-0 sudo[199878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:52 compute-0 python3.9[199880]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:52 compute-0 sudo[199878]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:53 compute-0 sudo[200030]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hksartboxsvrmjvxwhwvhkezaqfgmwis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621892.9505467-676-137578077201056/AnsiballZ_file.py'
Dec 01 20:44:53 compute-0 sudo[200030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:53 compute-0 python3.9[200032]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:53 compute-0 sudo[200030]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:53 compute-0 ceph-mon[75880]: pgmap v455: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:53 compute-0 sudo[200182]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnymevwlflbmreikyzxtpqfrdryexslp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621893.5843863-676-235245652189152/AnsiballZ_file.py'
Dec 01 20:44:53 compute-0 sudo[200182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:54 compute-0 python3.9[200184]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:54 compute-0 sudo[200182]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:54 compute-0 sudo[200334]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbbrlyffkrpqbmucdexpsjcrlutpppsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621894.238118-676-228939953527069/AnsiballZ_file.py'
Dec 01 20:44:54 compute-0 sudo[200334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v456: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:54 compute-0 python3.9[200336]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:54 compute-0 sudo[200334]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:55 compute-0 sudo[200486]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynrymazvgtabmxvndfxkmcgxjcozcqna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621894.8127618-676-194237847085628/AnsiballZ_file.py'
Dec 01 20:44:55 compute-0 sudo[200486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:55 compute-0 python3.9[200488]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:55 compute-0 sudo[200486]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:55 compute-0 sudo[200638]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-audjaeikdmjchalcyminfixwehjfderj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621895.5404954-775-195113137219213/AnsiballZ_stat.py'
Dec 01 20:44:55 compute-0 sudo[200638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:55 compute-0 ceph-mon[75880]: pgmap v456: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:56 compute-0 python3.9[200640]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:56 compute-0 sudo[200638]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:56 compute-0 sudo[200761]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plhqmawlcgdwreagxdgmygacnnhdkjyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621895.5404954-775-195113137219213/AnsiballZ_copy.py'
Dec 01 20:44:56 compute-0 sudo[200761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:56 compute-0 python3.9[200763]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621895.5404954-775-195113137219213/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:56 compute-0 sudo[200761]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:56 compute-0 sudo[200913]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbtsqwrmxyqhvaclluunwrvtpjjyhlam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621896.65762-775-237576631118878/AnsiballZ_stat.py'
Dec 01 20:44:56 compute-0 sudo[200913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:57 compute-0 python3.9[200915]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:57 compute-0 sudo[200913]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:44:57 compute-0 sudo[201036]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsksxppayejbfetdwzmuzfcwurriccny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621896.65762-775-237576631118878/AnsiballZ_copy.py'
Dec 01 20:44:57 compute-0 sudo[201036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:57 compute-0 python3.9[201038]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621896.65762-775-237576631118878/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:57 compute-0 sudo[201036]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:57 compute-0 ceph-mon[75880]: pgmap v457: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:58 compute-0 sudo[201188]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoelrbznqjsvntgwxzsleagzkquvcpqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621897.7886071-775-266057665370611/AnsiballZ_stat.py'
Dec 01 20:44:58 compute-0 sudo[201188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:58 compute-0 python3.9[201190]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:58 compute-0 sudo[201188]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:58 compute-0 sudo[201311]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deyqdbdxibkfosodozjxinzeuwdttgul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621897.7886071-775-266057665370611/AnsiballZ_copy.py'
Dec 01 20:44:58 compute-0 sudo[201311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:58 compute-0 python3.9[201313]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621897.7886071-775-266057665370611/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:58 compute-0 sudo[201311]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:59 compute-0 sudo[201463]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srmgphtmduqdrbkdhyjiyhcmohpiomws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621898.9493976-775-240602878492025/AnsiballZ_stat.py'
Dec 01 20:44:59 compute-0 sudo[201463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:59 compute-0 python3.9[201465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:44:59 compute-0 sudo[201463]: pam_unix(sudo:session): session closed for user root
Dec 01 20:44:59 compute-0 sudo[201586]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsjpqzymoluhxxhoaqpaewnyeynfxwrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621898.9493976-775-240602878492025/AnsiballZ_copy.py'
Dec 01 20:44:59 compute-0 sudo[201586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:44:59 compute-0 ceph-mon[75880]: pgmap v458: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:44:59 compute-0 python3.9[201588]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621898.9493976-775-240602878492025/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:44:59 compute-0 sudo[201586]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:00 compute-0 sudo[201738]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tguwthdxfsqknrfvlclukfndyskxkken ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621900.1446416-775-198992129429452/AnsiballZ_stat.py'
Dec 01 20:45:00 compute-0 sudo[201738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:00 compute-0 python3.9[201740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:00 compute-0 sudo[201738]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v459: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:00 compute-0 sudo[201861]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rikisembtbwlxewuirlfgrqgjxldozio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621900.1446416-775-198992129429452/AnsiballZ_copy.py'
Dec 01 20:45:00 compute-0 sudo[201861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:01 compute-0 python3.9[201863]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621900.1446416-775-198992129429452/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:01 compute-0 sudo[201861]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:01 compute-0 sudo[202013]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsphsbudqabskavzdowkielztsuzyoqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621901.207979-775-32353572977937/AnsiballZ_stat.py'
Dec 01 20:45:01 compute-0 sudo[202013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:01 compute-0 python3.9[202015]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:01 compute-0 sudo[202013]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:01 compute-0 ceph-mon[75880]: pgmap v459: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:02 compute-0 sudo[202136]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhsumpogbrujeuieamjvumhfvsqbyftg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621901.207979-775-32353572977937/AnsiballZ_copy.py'
Dec 01 20:45:02 compute-0 sudo[202136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:02 compute-0 python3.9[202138]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621901.207979-775-32353572977937/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:02 compute-0 sudo[202136]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:02 compute-0 sudo[202288]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhfcksckxmdxjlvxgoilrkomyyvtsbjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621902.5073876-775-131886300253296/AnsiballZ_stat.py'
Dec 01 20:45:02 compute-0 sudo[202288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:02 compute-0 python3.9[202290]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:02 compute-0 sudo[202288]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:03 compute-0 sudo[202411]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jentcrtxtbrnfcyucdmfldkowyjjcquz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621902.5073876-775-131886300253296/AnsiballZ_copy.py'
Dec 01 20:45:03 compute-0 sudo[202411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:45:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:45:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:45:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:45:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:45:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:45:03 compute-0 python3.9[202413]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621902.5073876-775-131886300253296/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:03 compute-0 sudo[202411]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:03 compute-0 sudo[202563]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zenoqiqscpcedgpyprbgnfynfkhwudzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621903.5593758-775-58594392670047/AnsiballZ_stat.py'
Dec 01 20:45:03 compute-0 sudo[202563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:03 compute-0 ceph-mon[75880]: pgmap v460: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:03 compute-0 python3.9[202565]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:03 compute-0 sudo[202563]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:04 compute-0 sudo[202686]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlmfjqzxzirlfmmfntbwrqveqhpdlkxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621903.5593758-775-58594392670047/AnsiballZ_copy.py'
Dec 01 20:45:04 compute-0 sudo[202686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:04 compute-0 python3.9[202688]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621903.5593758-775-58594392670047/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:04 compute-0 sudo[202686]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:04 compute-0 sudo[202838]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qabhfooylisifdikzsujomcigiwtylhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621904.6314933-775-101582957370678/AnsiballZ_stat.py'
Dec 01 20:45:04 compute-0 sudo[202838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:05 compute-0 python3.9[202840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:05 compute-0 sudo[202838]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:05 compute-0 sudo[202961]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehzxwbhhggrzhaschcdiexlgsfjfvjsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621904.6314933-775-101582957370678/AnsiballZ_copy.py'
Dec 01 20:45:05 compute-0 sudo[202961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:05 compute-0 python3.9[202963]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621904.6314933-775-101582957370678/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:05 compute-0 sudo[202961]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:05 compute-0 ceph-mon[75880]: pgmap v461: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:05 compute-0 sudo[203113]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwmeqiigwuuurmbqquzkgadbkrralgkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621905.7296598-775-25424256546619/AnsiballZ_stat.py'
Dec 01 20:45:05 compute-0 sudo[203113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:06 compute-0 python3.9[203115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:06 compute-0 sudo[203113]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:06 compute-0 sudo[203236]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uphdnebfshubpdavjqmxhtfemzmpouvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621905.7296598-775-25424256546619/AnsiballZ_copy.py'
Dec 01 20:45:06 compute-0 sudo[203236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:06 compute-0 python3.9[203238]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621905.7296598-775-25424256546619/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:06 compute-0 sudo[203236]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:07 compute-0 sudo[203388]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyuuieoxrpsombqbvmszpjyvvkzbiwqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621906.8120492-775-8521178205889/AnsiballZ_stat.py'
Dec 01 20:45:07 compute-0 sudo[203388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:07 compute-0 python3.9[203390]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:07 compute-0 sudo[203388]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:07 compute-0 sudo[203511]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbdghdpafyoalxuakqnwhtbrwcnazvsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621906.8120492-775-8521178205889/AnsiballZ_copy.py'
Dec 01 20:45:07 compute-0 sudo[203511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:07 compute-0 python3.9[203513]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621906.8120492-775-8521178205889/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:07 compute-0 sudo[203511]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:07 compute-0 ceph-mon[75880]: pgmap v462: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:08 compute-0 sudo[203663]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oigdhqchaorrbugsjfocmayqrveccydz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621907.8928607-775-166837597031851/AnsiballZ_stat.py'
Dec 01 20:45:08 compute-0 sudo[203663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:08 compute-0 python3.9[203665]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:08 compute-0 sudo[203663]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:08 compute-0 sudo[203786]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojaeyuvxwidjtbrgzifhhzgnbpvahrcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621907.8928607-775-166837597031851/AnsiballZ_copy.py'
Dec 01 20:45:08 compute-0 sudo[203786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:08 compute-0 python3.9[203788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621907.8928607-775-166837597031851/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:08 compute-0 sudo[203786]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:09 compute-0 sudo[203938]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-modpezeovryojndptvorjledhntztcvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621909.1135957-775-175433767671200/AnsiballZ_stat.py'
Dec 01 20:45:09 compute-0 sudo[203938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:09 compute-0 python3.9[203940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:09 compute-0 sudo[203938]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:09 compute-0 ceph-mon[75880]: pgmap v463: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:09 compute-0 sudo[204061]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydtvnwmbxsobqmpxdgmwnfhaxvctcaei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621909.1135957-775-175433767671200/AnsiballZ_copy.py'
Dec 01 20:45:09 compute-0 sudo[204061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:10 compute-0 python3.9[204063]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621909.1135957-775-175433767671200/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:10 compute-0 sudo[204061]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:10 compute-0 sudo[204213]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlqmjworzdhtjsvwnzqebywehdjftaaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621910.3780277-775-30689386679651/AnsiballZ_stat.py'
Dec 01 20:45:10 compute-0 sudo[204213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:10 compute-0 python3.9[204215]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:10 compute-0 sudo[204213]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:11 compute-0 sudo[204336]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvydxsprovvcjqtwapdxguzmhpnzgpny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621910.3780277-775-30689386679651/AnsiballZ_copy.py'
Dec 01 20:45:11 compute-0 sudo[204336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:11 compute-0 python3.9[204338]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621910.3780277-775-30689386679651/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:11 compute-0 sudo[204336]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:11 compute-0 ceph-mon[75880]: pgmap v464: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:12 compute-0 python3.9[204488]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:45:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v465: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:12 compute-0 sudo[204641]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcnqpoxdyzarqxbagwklavqwdsrigexm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621912.3620882-981-252039477026466/AnsiballZ_seboolean.py'
Dec 01 20:45:12 compute-0 sudo[204641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:13 compute-0 python3.9[204643]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 01 20:45:14 compute-0 ceph-mon[75880]: pgmap v465: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:14 compute-0 sudo[204641]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:14 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 01 20:45:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:14 compute-0 sudo[204807]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhnyuaozfdxksjcqumpaxereltyhuuap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621914.402353-989-103003205733174/AnsiballZ_copy.py'
Dec 01 20:45:14 compute-0 sudo[204807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:14 compute-0 podman[204771]: 2025-12-01 20:45:14.781241324 +0000 UTC m=+0.109496856 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:45:14 compute-0 podman[204813]: 2025-12-01 20:45:14.902427438 +0000 UTC m=+0.145968527 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 01 20:45:14 compute-0 python3.9[204816]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:14 compute-0 sudo[204807]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:14 compute-0 auditd[705]: Audit daemon rotating log files
Dec 01 20:45:15 compute-0 sudo[204991]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtrwasojkphlbekluoighvzvrmuncmiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621915.0869796-989-159760073573265/AnsiballZ_copy.py'
Dec 01 20:45:15 compute-0 sudo[204991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:15 compute-0 python3.9[204993]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:15 compute-0 sudo[204991]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:15 compute-0 sudo[205143]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eohonfgkjeikwrksejwcsjfxdzivaxyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621915.6334972-989-193604825055564/AnsiballZ_copy.py'
Dec 01 20:45:15 compute-0 sudo[205143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:16 compute-0 ceph-mon[75880]: pgmap v466: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:16 compute-0 python3.9[205145]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:16 compute-0 sudo[205143]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:16 compute-0 sudo[205295]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbhtscchgqcscltjliyzkhjjceqniqsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621916.2496526-989-98187482203918/AnsiballZ_copy.py'
Dec 01 20:45:16 compute-0 sudo[205295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:16 compute-0 python3.9[205297]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:16 compute-0 sudo[205295]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v467: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:17 compute-0 sudo[205447]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlpvlmnryijtisuddjpyvtmzrllrrrvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621916.7708824-989-71933898783696/AnsiballZ_copy.py'
Dec 01 20:45:17 compute-0 sudo[205447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:17 compute-0 python3.9[205449]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:17 compute-0 sudo[205447]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:17 compute-0 sudo[205599]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwkdmubaaqmwzrsgnuwsadtubzlirrbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621917.4951684-1025-150532348549640/AnsiballZ_copy.py'
Dec 01 20:45:17 compute-0 sudo[205599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:17 compute-0 python3.9[205601]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:18 compute-0 sudo[205599]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:18 compute-0 ceph-mon[75880]: pgmap v467: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:18 compute-0 sudo[205751]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzyogfgirtoashcjomcgcceyqnmfhdpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621918.1825106-1025-102893561471938/AnsiballZ_copy.py'
Dec 01 20:45:18 compute-0 sudo[205751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:18 compute-0 python3.9[205753]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:18 compute-0 sudo[205751]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:19 compute-0 sudo[205903]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wscixsxdjakzafogyvfforelsxsjhsoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621918.8627875-1025-278313002656364/AnsiballZ_copy.py'
Dec 01 20:45:19 compute-0 sudo[205903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:19 compute-0 python3.9[205905]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:19 compute-0 sudo[205903]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:19 compute-0 sudo[206055]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzjubfieqxowzubwilnpaltwopbozhxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621919.4871905-1025-240816277991313/AnsiballZ_copy.py'
Dec 01 20:45:19 compute-0 sudo[206055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:19 compute-0 python3.9[206057]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:19 compute-0 sudo[206055]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:20 compute-0 ceph-mon[75880]: pgmap v468: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:20 compute-0 sudo[206207]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeayhpdbrocjmlbodxqhqdbbkkafnfvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621920.1037002-1025-21173109988306/AnsiballZ_copy.py'
Dec 01 20:45:20 compute-0 sudo[206207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:20 compute-0 python3.9[206209]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:20 compute-0 sudo[206207]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:21 compute-0 sudo[206359]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjlobqptfwubqjkaiorpuekopsnocnnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621920.7865787-1061-209888895450525/AnsiballZ_systemd.py'
Dec 01 20:45:21 compute-0 sudo[206359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:21 compute-0 python3.9[206361]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:45:21 compute-0 systemd[1]: Reloading.
Dec 01 20:45:21 compute-0 systemd-sysv-generator[206391]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:45:21 compute-0 systemd-rc-local-generator[206387]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:45:21 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 01 20:45:21 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 01 20:45:21 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 01 20:45:21 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 01 20:45:21 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 01 20:45:22 compute-0 ceph-mon[75880]: pgmap v469: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:22 compute-0 systemd[1]: Started libvirt logging daemon.
Dec 01 20:45:22 compute-0 sudo[206359]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:22 compute-0 sudo[206551]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvvqguvfuhunpjcglumftebdgokbqyuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621922.360717-1061-158395537013797/AnsiballZ_systemd.py'
Dec 01 20:45:22 compute-0 sudo[206551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:22 compute-0 python3.9[206553]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:45:23 compute-0 systemd[1]: Reloading.
Dec 01 20:45:23 compute-0 systemd-rc-local-generator[206585]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:45:23 compute-0 systemd-sysv-generator[206588]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:45:23 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 01 20:45:23 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 01 20:45:23 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 01 20:45:23 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 01 20:45:23 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 01 20:45:23 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 01 20:45:23 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 01 20:45:23 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 01 20:45:23 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 01 20:45:23 compute-0 sudo[206551]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:23 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 01 20:45:23 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged.
Dec 01 20:45:23 compute-0 systemd[1]: Started dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 01 20:45:23 compute-0 sudo[206776]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbtgkdjudmuaahkaeluuwcsdpuajcsvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621923.6325223-1061-224589632098533/AnsiballZ_systemd.py'
Dec 01 20:45:23 compute-0 sudo[206776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:24 compute-0 ceph-mon[75880]: pgmap v470: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:24 compute-0 python3.9[206778]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:45:24 compute-0 systemd[1]: Reloading.
Dec 01 20:45:24 compute-0 systemd-rc-local-generator[206807]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:45:24 compute-0 systemd-sysv-generator[206810]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:45:24 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 01 20:45:24 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 01 20:45:24 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 01 20:45:24 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 01 20:45:24 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 01 20:45:24 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 01 20:45:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v471: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:24 compute-0 sudo[206776]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:24 compute-0 setroubleshoot[206591]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l d2705a9f-dc02-407d-803c-0deec0ae8254
Dec 01 20:45:24 compute-0 setroubleshoot[206591]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 01 20:45:25 compute-0 sudo[206989]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrhrqlischzxtzmrgsgqzrfzshksayiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621924.789706-1061-116538495157649/AnsiballZ_systemd.py'
Dec 01 20:45:25 compute-0 sudo[206989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:25 compute-0 sudo[206992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:45:25 compute-0 sudo[206992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:45:25 compute-0 sudo[206992]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:25 compute-0 python3.9[206991]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:45:25 compute-0 sudo[207017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:45:25 compute-0 sudo[207017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:45:25 compute-0 systemd[1]: Reloading.
Dec 01 20:45:25 compute-0 systemd-rc-local-generator[207066]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:45:25 compute-0 systemd-sysv-generator[207070]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:45:25 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 01 20:45:25 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 01 20:45:25 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 01 20:45:25 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 01 20:45:25 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 01 20:45:25 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 01 20:45:25 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 01 20:45:25 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 01 20:45:25 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 01 20:45:25 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 01 20:45:25 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 01 20:45:25 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 01 20:45:25 compute-0 sudo[206989]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:25 compute-0 sudo[207017]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:45:25 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:45:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:45:25 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:45:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:45:25 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:45:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:45:25 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:45:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:45:25 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:45:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:45:25 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:45:25 compute-0 sudo[207183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:45:25 compute-0 sudo[207183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:45:25 compute-0 sudo[207183]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:26 compute-0 sudo[207237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:45:26 compute-0 sudo[207237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:45:26 compute-0 ceph-mon[75880]: pgmap v471: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:26 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:45:26 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:45:26 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:45:26 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:45:26 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:45:26 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:45:26 compute-0 podman[207323]: 2025-12-01 20:45:26.299085243 +0000 UTC m=+0.035611433 container create 0fe22194421be1c993d2efb37ab3effdd71b58610120d4bd4d18b23926b29710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:45:26 compute-0 sudo[207360]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzmxuhufbsjrccyxfiijkwzjpvmpbzqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621925.913479-1061-59940332711708/AnsiballZ_systemd.py'
Dec 01 20:45:26 compute-0 sudo[207360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:26 compute-0 systemd[1]: Started libpod-conmon-0fe22194421be1c993d2efb37ab3effdd71b58610120d4bd4d18b23926b29710.scope.
Dec 01 20:45:26 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:45:26 compute-0 podman[207323]: 2025-12-01 20:45:26.283924757 +0000 UTC m=+0.020450967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:45:26 compute-0 podman[207323]: 2025-12-01 20:45:26.384376549 +0000 UTC m=+0.120902759 container init 0fe22194421be1c993d2efb37ab3effdd71b58610120d4bd4d18b23926b29710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 01 20:45:26 compute-0 podman[207323]: 2025-12-01 20:45:26.390946081 +0000 UTC m=+0.127472271 container start 0fe22194421be1c993d2efb37ab3effdd71b58610120d4bd4d18b23926b29710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:45:26 compute-0 podman[207323]: 2025-12-01 20:45:26.393871373 +0000 UTC m=+0.130397593 container attach 0fe22194421be1c993d2efb37ab3effdd71b58610120d4bd4d18b23926b29710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jepsen, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:45:26 compute-0 blissful_jepsen[207367]: 167 167
Dec 01 20:45:26 compute-0 systemd[1]: libpod-0fe22194421be1c993d2efb37ab3effdd71b58610120d4bd4d18b23926b29710.scope: Deactivated successfully.
Dec 01 20:45:26 compute-0 conmon[207367]: conmon 0fe22194421be1c993d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0fe22194421be1c993d2efb37ab3effdd71b58610120d4bd4d18b23926b29710.scope/container/memory.events
Dec 01 20:45:26 compute-0 podman[207323]: 2025-12-01 20:45:26.396940102 +0000 UTC m=+0.133466322 container died 0fe22194421be1c993d2efb37ab3effdd71b58610120d4bd4d18b23926b29710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 01 20:45:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4e74fd7b7412f5ee4854554b6e575c49c4eca6d314c582874c4d0f19628ca77-merged.mount: Deactivated successfully.
Dec 01 20:45:26 compute-0 podman[207323]: 2025-12-01 20:45:26.442573096 +0000 UTC m=+0.179099286 container remove 0fe22194421be1c993d2efb37ab3effdd71b58610120d4bd4d18b23926b29710 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:45:26 compute-0 systemd[1]: libpod-conmon-0fe22194421be1c993d2efb37ab3effdd71b58610120d4bd4d18b23926b29710.scope: Deactivated successfully.
Dec 01 20:45:26 compute-0 podman[207392]: 2025-12-01 20:45:26.595200733 +0000 UTC m=+0.037468653 container create 199f1335b3264e988f57c361283c90bc53b4ab5f78ef806d3bb88033bb2d0183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_dijkstra, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 01 20:45:26 compute-0 python3.9[207364]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:45:26 compute-0 systemd[1]: Reloading.
Dec 01 20:45:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:26 compute-0 podman[207392]: 2025-12-01 20:45:26.578434635 +0000 UTC m=+0.020702545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:45:26 compute-0 systemd-rc-local-generator[207433]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:45:26 compute-0 systemd-sysv-generator[207436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:45:26 compute-0 systemd[1]: Started libpod-conmon-199f1335b3264e988f57c361283c90bc53b4ab5f78ef806d3bb88033bb2d0183.scope.
Dec 01 20:45:26 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868c7baaaac7568e47288015cea9a3d36d0f26d7460924f7678c3b5c045add11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868c7baaaac7568e47288015cea9a3d36d0f26d7460924f7678c3b5c045add11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868c7baaaac7568e47288015cea9a3d36d0f26d7460924f7678c3b5c045add11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868c7baaaac7568e47288015cea9a3d36d0f26d7460924f7678c3b5c045add11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868c7baaaac7568e47288015cea9a3d36d0f26d7460924f7678c3b5c045add11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:26 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 01 20:45:26 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 01 20:45:26 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 01 20:45:26 compute-0 podman[207392]: 2025-12-01 20:45:26.989714509 +0000 UTC m=+0.431982429 container init 199f1335b3264e988f57c361283c90bc53b4ab5f78ef806d3bb88033bb2d0183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:45:26 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 01 20:45:26 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 01 20:45:26 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 01 20:45:27 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 01 20:45:27 compute-0 podman[207392]: 2025-12-01 20:45:27.007594572 +0000 UTC m=+0.449862472 container start 199f1335b3264e988f57c361283c90bc53b4ab5f78ef806d3bb88033bb2d0183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 01 20:45:27 compute-0 podman[207392]: 2025-12-01 20:45:27.010777344 +0000 UTC m=+0.453045244 container attach 199f1335b3264e988f57c361283c90bc53b4ab5f78ef806d3bb88033bb2d0183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_dijkstra, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:45:27 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 01 20:45:27 compute-0 sudo[207360]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:27 compute-0 nervous_dijkstra[207444]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:45:27 compute-0 nervous_dijkstra[207444]: --> All data devices are unavailable
Dec 01 20:45:27 compute-0 podman[207392]: 2025-12-01 20:45:27.526992154 +0000 UTC m=+0.969260064 container died 199f1335b3264e988f57c361283c90bc53b4ab5f78ef806d3bb88033bb2d0183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_dijkstra, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 01 20:45:27 compute-0 systemd[1]: libpod-199f1335b3264e988f57c361283c90bc53b4ab5f78ef806d3bb88033bb2d0183.scope: Deactivated successfully.
Dec 01 20:45:27 compute-0 sudo[207637]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibkdxskwhyudxbwgmrlqvmfmqycetjia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621927.260872-1098-271835363234197/AnsiballZ_file.py'
Dec 01 20:45:27 compute-0 sudo[207637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-868c7baaaac7568e47288015cea9a3d36d0f26d7460924f7678c3b5c045add11-merged.mount: Deactivated successfully.
Dec 01 20:45:27 compute-0 podman[207392]: 2025-12-01 20:45:27.569629321 +0000 UTC m=+1.011897221 container remove 199f1335b3264e988f57c361283c90bc53b4ab5f78ef806d3bb88033bb2d0183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:45:27 compute-0 systemd[1]: libpod-conmon-199f1335b3264e988f57c361283c90bc53b4ab5f78ef806d3bb88033bb2d0183.scope: Deactivated successfully.
Dec 01 20:45:27 compute-0 sudo[207237]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:27 compute-0 sudo[207652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:45:27 compute-0 sudo[207652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:45:27 compute-0 sudo[207652]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:27 compute-0 sudo[207677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:45:27 compute-0 sudo[207677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:45:27 compute-0 python3.9[207647]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:27 compute-0 sudo[207637]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:27 compute-0 podman[207771]: 2025-12-01 20:45:27.999444171 +0000 UTC m=+0.036667268 container create 6907d72283f2e3d0a514d02145c7197ac7ec36f326444b40c0103e749473fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 01 20:45:28 compute-0 systemd[1]: Started libpod-conmon-6907d72283f2e3d0a514d02145c7197ac7ec36f326444b40c0103e749473fd5d.scope.
Dec 01 20:45:28 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:45:28 compute-0 ceph-mon[75880]: pgmap v472: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:28 compute-0 podman[207771]: 2025-12-01 20:45:27.983942383 +0000 UTC m=+0.021165500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:45:28 compute-0 podman[207771]: 2025-12-01 20:45:28.080576233 +0000 UTC m=+0.117799350 container init 6907d72283f2e3d0a514d02145c7197ac7ec36f326444b40c0103e749473fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:45:28 compute-0 podman[207771]: 2025-12-01 20:45:28.088539049 +0000 UTC m=+0.125762146 container start 6907d72283f2e3d0a514d02145c7197ac7ec36f326444b40c0103e749473fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:45:28 compute-0 podman[207771]: 2025-12-01 20:45:28.091772842 +0000 UTC m=+0.128996019 container attach 6907d72283f2e3d0a514d02145c7197ac7ec36f326444b40c0103e749473fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 01 20:45:28 compute-0 competent_pike[207824]: 167 167
Dec 01 20:45:28 compute-0 systemd[1]: libpod-6907d72283f2e3d0a514d02145c7197ac7ec36f326444b40c0103e749473fd5d.scope: Deactivated successfully.
Dec 01 20:45:28 compute-0 podman[207771]: 2025-12-01 20:45:28.096709311 +0000 UTC m=+0.133932408 container died 6907d72283f2e3d0a514d02145c7197ac7ec36f326444b40c0103e749473fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 01 20:45:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5c0faaed251e1da397aacde462c1072bc6bb31ef0e8f54a405db4c874b9e33c-merged.mount: Deactivated successfully.
Dec 01 20:45:28 compute-0 podman[207771]: 2025-12-01 20:45:28.134636858 +0000 UTC m=+0.171859965 container remove 6907d72283f2e3d0a514d02145c7197ac7ec36f326444b40c0103e749473fd5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:45:28 compute-0 systemd[1]: libpod-conmon-6907d72283f2e3d0a514d02145c7197ac7ec36f326444b40c0103e749473fd5d.scope: Deactivated successfully.
Dec 01 20:45:28 compute-0 sudo[207895]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwhonnhooknhwfgkhksyyjauvdaaczol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621927.9173014-1106-180020932371636/AnsiballZ_find.py'
Dec 01 20:45:28 compute-0 sudo[207895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:28 compute-0 podman[207903]: 2025-12-01 20:45:28.302002867 +0000 UTC m=+0.039347954 container create fc1ee18109364bb5545563b5bdbabe5934a94354888e66fd96747f51cfa9c992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:45:28 compute-0 systemd[1]: Started libpod-conmon-fc1ee18109364bb5545563b5bdbabe5934a94354888e66fd96747f51cfa9c992.scope.
Dec 01 20:45:28 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af204f3e63c85e05c44a4913852cbd105e5843e5e40a0cd0036067b1c9187c93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:28 compute-0 podman[207903]: 2025-12-01 20:45:28.283456952 +0000 UTC m=+0.020802059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af204f3e63c85e05c44a4913852cbd105e5843e5e40a0cd0036067b1c9187c93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af204f3e63c85e05c44a4913852cbd105e5843e5e40a0cd0036067b1c9187c93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af204f3e63c85e05c44a4913852cbd105e5843e5e40a0cd0036067b1c9187c93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:28 compute-0 podman[207903]: 2025-12-01 20:45:28.387282452 +0000 UTC m=+0.124627569 container init fc1ee18109364bb5545563b5bdbabe5934a94354888e66fd96747f51cfa9c992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hopper, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:45:28 compute-0 podman[207903]: 2025-12-01 20:45:28.395476985 +0000 UTC m=+0.132822072 container start fc1ee18109364bb5545563b5bdbabe5934a94354888e66fd96747f51cfa9c992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:45:28 compute-0 podman[207903]: 2025-12-01 20:45:28.398333606 +0000 UTC m=+0.135678713 container attach fc1ee18109364bb5545563b5bdbabe5934a94354888e66fd96747f51cfa9c992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:45:28 compute-0 python3.9[207897]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 01 20:45:28 compute-0 sudo[207895]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:28 compute-0 great_hopper[207920]: {
Dec 01 20:45:28 compute-0 great_hopper[207920]:     "0": [
Dec 01 20:45:28 compute-0 great_hopper[207920]:         {
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "devices": [
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "/dev/loop3"
Dec 01 20:45:28 compute-0 great_hopper[207920]:             ],
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_name": "ceph_lv0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_size": "21470642176",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "name": "ceph_lv0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "tags": {
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.cluster_name": "ceph",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.crush_device_class": "",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.encrypted": "0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.objectstore": "bluestore",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.osd_id": "0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.type": "block",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.vdo": "0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.with_tpm": "0"
Dec 01 20:45:28 compute-0 great_hopper[207920]:             },
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "type": "block",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "vg_name": "ceph_vg0"
Dec 01 20:45:28 compute-0 great_hopper[207920]:         }
Dec 01 20:45:28 compute-0 great_hopper[207920]:     ],
Dec 01 20:45:28 compute-0 great_hopper[207920]:     "1": [
Dec 01 20:45:28 compute-0 great_hopper[207920]:         {
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "devices": [
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "/dev/loop4"
Dec 01 20:45:28 compute-0 great_hopper[207920]:             ],
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_name": "ceph_lv1",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_size": "21470642176",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "name": "ceph_lv1",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "tags": {
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.cluster_name": "ceph",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.crush_device_class": "",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.encrypted": "0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.objectstore": "bluestore",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.osd_id": "1",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.type": "block",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.vdo": "0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.with_tpm": "0"
Dec 01 20:45:28 compute-0 great_hopper[207920]:             },
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "type": "block",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "vg_name": "ceph_vg1"
Dec 01 20:45:28 compute-0 great_hopper[207920]:         }
Dec 01 20:45:28 compute-0 great_hopper[207920]:     ],
Dec 01 20:45:28 compute-0 great_hopper[207920]:     "2": [
Dec 01 20:45:28 compute-0 great_hopper[207920]:         {
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "devices": [
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "/dev/loop5"
Dec 01 20:45:28 compute-0 great_hopper[207920]:             ],
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_name": "ceph_lv2",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_size": "21470642176",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "name": "ceph_lv2",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "tags": {
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.cluster_name": "ceph",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.crush_device_class": "",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.encrypted": "0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.objectstore": "bluestore",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.osd_id": "2",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.type": "block",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.vdo": "0",
Dec 01 20:45:28 compute-0 great_hopper[207920]:                 "ceph.with_tpm": "0"
Dec 01 20:45:28 compute-0 great_hopper[207920]:             },
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "type": "block",
Dec 01 20:45:28 compute-0 great_hopper[207920]:             "vg_name": "ceph_vg2"
Dec 01 20:45:28 compute-0 great_hopper[207920]:         }
Dec 01 20:45:28 compute-0 great_hopper[207920]:     ]
Dec 01 20:45:28 compute-0 great_hopper[207920]: }
Dec 01 20:45:28 compute-0 systemd[1]: libpod-fc1ee18109364bb5545563b5bdbabe5934a94354888e66fd96747f51cfa9c992.scope: Deactivated successfully.
Dec 01 20:45:28 compute-0 podman[207903]: 2025-12-01 20:45:28.700682476 +0000 UTC m=+0.438027573 container died fc1ee18109364bb5545563b5bdbabe5934a94354888e66fd96747f51cfa9c992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 01 20:45:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-af204f3e63c85e05c44a4913852cbd105e5843e5e40a0cd0036067b1c9187c93-merged.mount: Deactivated successfully.
Dec 01 20:45:28 compute-0 podman[207903]: 2025-12-01 20:45:28.740305157 +0000 UTC m=+0.477650244 container remove fc1ee18109364bb5545563b5bdbabe5934a94354888e66fd96747f51cfa9c992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 20:45:28 compute-0 systemd[1]: libpod-conmon-fc1ee18109364bb5545563b5bdbabe5934a94354888e66fd96747f51cfa9c992.scope: Deactivated successfully.
Dec 01 20:45:28 compute-0 sudo[207677]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:28 compute-0 sudo[208039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:45:28 compute-0 sudo[208039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:45:28 compute-0 sudo[208039]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:28 compute-0 sudo[208088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:45:28 compute-0 sudo[208088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:45:28 compute-0 sudo[208136]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regscennohusrozmnwguyqrulqzqlbyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621928.6083248-1114-24568599282729/AnsiballZ_command.py'
Dec 01 20:45:28 compute-0 sudo[208136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:29 compute-0 python3.9[208140]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:45:29 compute-0 sudo[208136]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:29 compute-0 podman[208156]: 2025-12-01 20:45:29.157141879 +0000 UTC m=+0.037659599 container create bb247228bd9c0235cafe55ca8e5905c130c87189797233a92951d976a261635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 01 20:45:29 compute-0 systemd[1]: Started libpod-conmon-bb247228bd9c0235cafe55ca8e5905c130c87189797233a92951d976a261635d.scope.
Dec 01 20:45:29 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:45:29 compute-0 podman[208156]: 2025-12-01 20:45:29.14066729 +0000 UTC m=+0.021185070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:45:29 compute-0 podman[208156]: 2025-12-01 20:45:29.257138327 +0000 UTC m=+0.137656077 container init bb247228bd9c0235cafe55ca8e5905c130c87189797233a92951d976a261635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:45:29 compute-0 podman[208156]: 2025-12-01 20:45:29.264784902 +0000 UTC m=+0.145302622 container start bb247228bd9c0235cafe55ca8e5905c130c87189797233a92951d976a261635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mendel, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:45:29 compute-0 podman[208156]: 2025-12-01 20:45:29.267601263 +0000 UTC m=+0.148118983 container attach bb247228bd9c0235cafe55ca8e5905c130c87189797233a92951d976a261635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mendel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:45:29 compute-0 peaceful_mendel[208198]: 167 167
Dec 01 20:45:29 compute-0 systemd[1]: libpod-bb247228bd9c0235cafe55ca8e5905c130c87189797233a92951d976a261635d.scope: Deactivated successfully.
Dec 01 20:45:29 compute-0 conmon[208198]: conmon bb247228bd9c0235cafe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb247228bd9c0235cafe55ca8e5905c130c87189797233a92951d976a261635d.scope/container/memory.events
Dec 01 20:45:29 compute-0 podman[208156]: 2025-12-01 20:45:29.272022294 +0000 UTC m=+0.152540044 container died bb247228bd9c0235cafe55ca8e5905c130c87189797233a92951d976a261635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mendel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 01 20:45:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0a64bc07dbc4e9b162847ee8ee7ea570c28627518550ded3dffeb2bfc9505f8-merged.mount: Deactivated successfully.
Dec 01 20:45:29 compute-0 podman[208156]: 2025-12-01 20:45:29.307158922 +0000 UTC m=+0.187676642 container remove bb247228bd9c0235cafe55ca8e5905c130c87189797233a92951d976a261635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:45:29 compute-0 systemd[1]: libpod-conmon-bb247228bd9c0235cafe55ca8e5905c130c87189797233a92951d976a261635d.scope: Deactivated successfully.
Dec 01 20:45:29 compute-0 podman[208267]: 2025-12-01 20:45:29.485594886 +0000 UTC m=+0.052660990 container create f0ef83c3688a540f7b17e6cfd6bb1c8a0f5fdb1f684841f0e518fdf7fde14934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:45:29 compute-0 systemd[1]: Started libpod-conmon-f0ef83c3688a540f7b17e6cfd6bb1c8a0f5fdb1f684841f0e518fdf7fde14934.scope.
Dec 01 20:45:29 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c49b7192bdabd3e135176ae33b74c31806882739ccd04de577759a3e8faf58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c49b7192bdabd3e135176ae33b74c31806882739ccd04de577759a3e8faf58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c49b7192bdabd3e135176ae33b74c31806882739ccd04de577759a3e8faf58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c49b7192bdabd3e135176ae33b74c31806882739ccd04de577759a3e8faf58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:45:29 compute-0 podman[208267]: 2025-12-01 20:45:29.460753789 +0000 UTC m=+0.027819943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:45:29 compute-0 podman[208267]: 2025-12-01 20:45:29.561739878 +0000 UTC m=+0.128806002 container init f0ef83c3688a540f7b17e6cfd6bb1c8a0f5fdb1f684841f0e518fdf7fde14934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:45:29 compute-0 podman[208267]: 2025-12-01 20:45:29.567485343 +0000 UTC m=+0.134551447 container start f0ef83c3688a540f7b17e6cfd6bb1c8a0f5fdb1f684841f0e518fdf7fde14934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:45:29 compute-0 podman[208267]: 2025-12-01 20:45:29.571384337 +0000 UTC m=+0.138450441 container attach f0ef83c3688a540f7b17e6cfd6bb1c8a0f5fdb1f684841f0e518fdf7fde14934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:45:29 compute-0 python3.9[208368]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 01 20:45:30 compute-0 ceph-mon[75880]: pgmap v473: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:30 compute-0 lvm[208539]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:45:30 compute-0 lvm[208542]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:45:30 compute-0 lvm[208539]: VG ceph_vg0 finished
Dec 01 20:45:30 compute-0 lvm[208542]: VG ceph_vg1 finished
Dec 01 20:45:30 compute-0 lvm[208544]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:45:30 compute-0 lvm[208544]: VG ceph_vg2 finished
Dec 01 20:45:30 compute-0 lvm[208551]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:45:30 compute-0 lvm[208551]: VG ceph_vg2 finished
Dec 01 20:45:30 compute-0 lvm[208580]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:45:30 compute-0 lvm[208580]: VG ceph_vg2 finished
Dec 01 20:45:30 compute-0 xenodochial_euclid[208314]: {}
Dec 01 20:45:30 compute-0 systemd[1]: libpod-f0ef83c3688a540f7b17e6cfd6bb1c8a0f5fdb1f684841f0e518fdf7fde14934.scope: Deactivated successfully.
Dec 01 20:45:30 compute-0 systemd[1]: libpod-f0ef83c3688a540f7b17e6cfd6bb1c8a0f5fdb1f684841f0e518fdf7fde14934.scope: Consumed 1.344s CPU time.
Dec 01 20:45:30 compute-0 podman[208267]: 2025-12-01 20:45:30.38692321 +0000 UTC m=+0.953989314 container died f0ef83c3688a540f7b17e6cfd6bb1c8a0f5fdb1f684841f0e518fdf7fde14934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 01 20:45:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-49c49b7192bdabd3e135176ae33b74c31806882739ccd04de577759a3e8faf58-merged.mount: Deactivated successfully.
Dec 01 20:45:30 compute-0 podman[208267]: 2025-12-01 20:45:30.427896645 +0000 UTC m=+0.994962749 container remove f0ef83c3688a540f7b17e6cfd6bb1c8a0f5fdb1f684841f0e518fdf7fde14934 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 01 20:45:30 compute-0 systemd[1]: libpod-conmon-f0ef83c3688a540f7b17e6cfd6bb1c8a0f5fdb1f684841f0e518fdf7fde14934.scope: Deactivated successfully.
Dec 01 20:45:30 compute-0 sudo[208088]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:30 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:45:30 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:45:30 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:45:30 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:45:30 compute-0 sudo[208611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:45:30 compute-0 sudo[208611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:45:30 compute-0 sudo[208611]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:30 compute-0 python3.9[208599]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:31 compute-0 python3.9[208756]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621930.1067045-1133-257465931780450/.source.xml follow=False _original_basename=secret.xml.j2 checksum=36f7b8cb0ff0e55a3e9b2535e7085009193fe584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:31 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:45:31 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:45:31 compute-0 ceph-mon[75880]: pgmap v474: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:31 compute-0 sudo[208906]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfxlrghkusrhplrnorroclbbkbpvoged ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621931.2208073-1148-170586036376444/AnsiballZ_command.py'
Dec 01 20:45:31 compute-0 sudo[208906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:31 compute-0 python3.9[208908]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine dcf60a89-bba0-58b0-a1bf-d4bde723201b
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:45:31 compute-0 polkitd[44197]: Registered Authentication Agent for unix-process:208910:321740 (system bus name :1.2504 [pkttyagent --process 208910 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 01 20:45:31 compute-0 polkitd[44197]: Unregistered Authentication Agent for unix-process:208910:321740 (system bus name :1.2504, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 01 20:45:31 compute-0 polkitd[44197]: Registered Authentication Agent for unix-process:208909:321739 (system bus name :1.2505 [pkttyagent --process 208909 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 01 20:45:31 compute-0 polkitd[44197]: Unregistered Authentication Agent for unix-process:208909:321739 (system bus name :1.2505, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 01 20:45:31 compute-0 sudo[208906]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:32 compute-0 python3.9[209070]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:45:32
Dec 01 20:45:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:45:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:45:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['images', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Dec 01 20:45:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:45:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v475: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:32 compute-0 sudo[209220]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrzbjiqrsltdvoihxqjigdhkelcfpun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621932.4970818-1164-208027491951890/AnsiballZ_command.py'
Dec 01 20:45:32 compute-0 sudo[209220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:32 compute-0 sudo[209220]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:45:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:45:33 compute-0 sudo[209373]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkvnfotzqfatudiemgultakxyeedfmbq ; FSID=dcf60a89-bba0-58b0-a1bf-d4bde723201b KEY=AQDp+i1pAAAAABAApJ092b/18HbtI+y6dgTdfg== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621933.1340106-1172-109487695226551/AnsiballZ_command.py'
Dec 01 20:45:33 compute-0 sudo[209373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:33 compute-0 polkitd[44197]: Registered Authentication Agent for unix-process:209376:321928 (system bus name :1.2508 [pkttyagent --process 209376 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 01 20:45:33 compute-0 polkitd[44197]: Unregistered Authentication Agent for unix-process:209376:321928 (system bus name :1.2508, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 01 20:45:33 compute-0 sudo[209373]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:33 compute-0 ceph-mon[75880]: pgmap v475: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:34 compute-0 sudo[209531]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chdjyslwhmylbceafkmijgpcargtwrfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621933.8533323-1180-239321835176951/AnsiballZ_copy.py'
Dec 01 20:45:34 compute-0 sudo[209531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:34 compute-0 python3.9[209533]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:34 compute-0 sudo[209531]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:34 compute-0 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 01 20:45:34 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 01 20:45:34 compute-0 sudo[209683]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjqqmrcrocycvvgpyynqbafjrhbcvuip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621934.5717986-1188-130521788462073/AnsiballZ_stat.py'
Dec 01 20:45:34 compute-0 sudo[209683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:35 compute-0 python3.9[209685]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:35 compute-0 sudo[209683]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:35 compute-0 sudo[209806]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utxarqmuykyhcwamydxrycqgycvdkzis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621934.5717986-1188-130521788462073/AnsiballZ_copy.py'
Dec 01 20:45:35 compute-0 sudo[209806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:35 compute-0 python3.9[209808]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621934.5717986-1188-130521788462073/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:35 compute-0 sudo[209806]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:35 compute-0 ceph-mon[75880]: pgmap v476: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:36 compute-0 sudo[209958]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpofzmrwcvjzbyrgmcdxxjwpuurninrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621935.8784242-1204-91218921286840/AnsiballZ_file.py'
Dec 01 20:45:36 compute-0 sudo[209958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:36 compute-0 python3.9[209960]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:36 compute-0 sudo[209958]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:36 compute-0 sudo[210110]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhyusyzeyatjzexxcqjdhtlffrtaemum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621936.4520857-1212-80325889401056/AnsiballZ_stat.py'
Dec 01 20:45:36 compute-0 sudo[210110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:36 compute-0 python3.9[210112]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:36 compute-0 sudo[210110]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:37 compute-0 sudo[210188]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-filcloczmfktynupkgrumcgyopqpthog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621936.4520857-1212-80325889401056/AnsiballZ_file.py'
Dec 01 20:45:37 compute-0 sudo[210188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:37 compute-0 python3.9[210190]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:37 compute-0 sudo[210188]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:37 compute-0 sudo[210340]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfjiuqcortlkdehhxnohqyaezrpkltrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621937.4092135-1224-71550939660601/AnsiballZ_stat.py'
Dec 01 20:45:37 compute-0 sudo[210340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:37 compute-0 ceph-mon[75880]: pgmap v477: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:37 compute-0 python3.9[210342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:37 compute-0 sudo[210340]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:38 compute-0 sudo[210418]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzgirajgebxczwmjnegtvenyhvmsanuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621937.4092135-1224-71550939660601/AnsiballZ_file.py'
Dec 01 20:45:38 compute-0 sudo[210418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:38 compute-0 python3.9[210420]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.i0qcguc7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:38 compute-0 sudo[210418]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:38 compute-0 sudo[210570]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-corcnfefsdgvsradxtnnnywjvacsqkqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621938.5073433-1236-86177864862909/AnsiballZ_stat.py'
Dec 01 20:45:38 compute-0 sudo[210570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:38 compute-0 python3.9[210572]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:38 compute-0 sudo[210570]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:39 compute-0 sudo[210648]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxmykbjnuoacnesjfyanopjirlicueif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621938.5073433-1236-86177864862909/AnsiballZ_file.py'
Dec 01 20:45:39 compute-0 sudo[210648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:39 compute-0 python3.9[210650]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:39 compute-0 sudo[210648]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:39 compute-0 ceph-mon[75880]: pgmap v478: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:39 compute-0 sudo[210800]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgwvdfudrmzoldghtoxguwidqsnmgipz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621939.5940576-1249-227070540447763/AnsiballZ_command.py'
Dec 01 20:45:39 compute-0 sudo[210800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:40 compute-0 python3.9[210802]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:45:40 compute-0 sudo[210800]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:45:40 compute-0 sudo[210953]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbblmczvhxxcdcqudseembggehgaurfz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764621940.1909099-1257-23648063403945/AnsiballZ_edpm_nftables_from_files.py'
Dec 01 20:45:40 compute-0 sudo[210953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:40 compute-0 python3[210955]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 01 20:45:40 compute-0 sudo[210953]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:41 compute-0 sudo[211105]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azvcawmxeergfhmqkudeouxkbvdxljme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621940.9525046-1265-195435831631248/AnsiballZ_stat.py'
Dec 01 20:45:41 compute-0 sudo[211105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:41 compute-0 python3.9[211107]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:41 compute-0 sudo[211105]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:41 compute-0 sudo[211183]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybxxjroobydbcxciodkwsuufhjinvlph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621940.9525046-1265-195435831631248/AnsiballZ_file.py'
Dec 01 20:45:41 compute-0 sudo[211183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:41 compute-0 ceph-mon[75880]: pgmap v479: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:41 compute-0 python3.9[211185]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:41 compute-0 sudo[211183]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:42 compute-0 sudo[211335]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnzigybeuwrguygxfspjstsionjkdxyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621942.1081486-1277-149404498701348/AnsiballZ_stat.py'
Dec 01 20:45:42 compute-0 sudo[211335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:42 compute-0 python3.9[211337]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v480: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:42 compute-0 sudo[211335]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:42 compute-0 sudo[211413]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmqrdjwcyhipnbxzuszzrbdpmpqyyijg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621942.1081486-1277-149404498701348/AnsiballZ_file.py'
Dec 01 20:45:42 compute-0 sudo[211413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:43 compute-0 python3.9[211415]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:43 compute-0 sudo[211413]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:43 compute-0 ceph-mon[75880]: pgmap v480: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:43 compute-0 sudo[211565]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbmeyuhcvbtslkpqiyjcpitlaagzkhbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621943.3776493-1289-8426968792530/AnsiballZ_stat.py'
Dec 01 20:45:43 compute-0 sudo[211565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:43 compute-0 python3.9[211567]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:44 compute-0 sudo[211565]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:44 compute-0 sudo[211643]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abzbdqrslkmcmkdknushglacxeuesele ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621943.3776493-1289-8426968792530/AnsiballZ_file.py'
Dec 01 20:45:44 compute-0 sudo[211643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:45:44.344 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:45:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:45:44.345 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:45:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:45:44.345 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:45:44 compute-0 python3.9[211645]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:44 compute-0 sudo[211643]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v481: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:44 compute-0 sudo[211795]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nddhxpmzyaeobwsczgoazhuyghujquib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621944.567724-1301-33374669361681/AnsiballZ_stat.py'
Dec 01 20:45:44 compute-0 sudo[211795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:44 compute-0 podman[211797]: 2025-12-01 20:45:44.908874661 +0000 UTC m=+0.047441463 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 01 20:45:45 compute-0 python3.9[211798]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:45 compute-0 sudo[211795]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:45 compute-0 podman[211817]: 2025-12-01 20:45:45.13704258 +0000 UTC m=+0.098816790 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 01 20:45:45 compute-0 sudo[211917]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imugozjbwroqydtvtrokrpvgzlqznodb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621944.567724-1301-33374669361681/AnsiballZ_file.py'
Dec 01 20:45:45 compute-0 sudo[211917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:45 compute-0 python3.9[211919]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:45 compute-0 sudo[211917]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:45 compute-0 ceph-mon[75880]: pgmap v481: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:45 compute-0 sudo[212069]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucsexyrdaqzgrqsqptgxrdkytamqowch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621945.6109004-1313-32325088650693/AnsiballZ_stat.py'
Dec 01 20:45:45 compute-0 sudo[212069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:46 compute-0 python3.9[212071]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:46 compute-0 sudo[212069]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:46 compute-0 sudo[212194]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkgkkukiyrxxidptaulesaucxfnrcgth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621945.6109004-1313-32325088650693/AnsiballZ_copy.py'
Dec 01 20:45:46 compute-0 sudo[212194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:46 compute-0 python3.9[212196]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764621945.6109004-1313-32325088650693/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:46 compute-0 sudo[212194]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:47 compute-0 sudo[212346]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpfxzcktsujmfqhlewagvaiwwukhxknl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621946.8400235-1328-67684135502626/AnsiballZ_file.py'
Dec 01 20:45:47 compute-0 sudo[212346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:47 compute-0 python3.9[212348]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:47 compute-0 sudo[212346]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:47 compute-0 sudo[212498]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btwdefpmhitbsupsnciuwfxpuzdufqae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621947.4671643-1336-554549249258/AnsiballZ_command.py'
Dec 01 20:45:47 compute-0 sudo[212498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:47 compute-0 ceph-mon[75880]: pgmap v482: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:47 compute-0 python3.9[212500]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:45:47 compute-0 sudo[212498]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:48 compute-0 sudo[212653]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuywaxsxbyrtrsyzfkpsfxlnwphiemyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621948.098528-1344-10724666752711/AnsiballZ_blockinfile.py'
Dec 01 20:45:48 compute-0 sudo[212653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v483: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:48 compute-0 python3.9[212655]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:48 compute-0 sudo[212653]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:49 compute-0 sudo[212805]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llzvtczfiwdteokavemqjoskiezecbyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621948.9371567-1353-132126248443188/AnsiballZ_command.py'
Dec 01 20:45:49 compute-0 sudo[212805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:49 compute-0 python3.9[212807]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:45:49 compute-0 sudo[212805]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:49 compute-0 sudo[212958]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqmskrolpekwmpskmjxvdzpbsjqpqray ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621949.5447319-1361-186220456616374/AnsiballZ_stat.py'
Dec 01 20:45:49 compute-0 sudo[212958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:49 compute-0 ceph-mon[75880]: pgmap v483: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:49 compute-0 python3.9[212960]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:45:49 compute-0 sudo[212958]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:50 compute-0 sudo[213112]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwhtlqsvltdvkljwajpyvdqupanvafxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621950.0962498-1369-223337260555103/AnsiballZ_command.py'
Dec 01 20:45:50 compute-0 sudo[213112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:50 compute-0 python3.9[213114]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:45:50 compute-0 sudo[213112]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:51 compute-0 sudo[213267]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnlzacgaoxdmbtobbnbqywlncygpawaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621950.7298663-1377-134439705303525/AnsiballZ_file.py'
Dec 01 20:45:51 compute-0 sudo[213267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:51 compute-0 python3.9[213269]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:51 compute-0 sudo[213267]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:51 compute-0 sudo[213419]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfdhjjibjimnchuwtopzgjbrrfsnrxce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621951.4568024-1385-172779068881872/AnsiballZ_stat.py'
Dec 01 20:45:51 compute-0 sudo[213419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:51 compute-0 ceph-mon[75880]: pgmap v484: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:51 compute-0 python3.9[213421]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:51 compute-0 sudo[213419]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:52 compute-0 sudo[213542]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnbtompfsicllqzrgxuuntytbeuepmes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621951.4568024-1385-172779068881872/AnsiballZ_copy.py'
Dec 01 20:45:52 compute-0 sudo[213542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:52 compute-0 python3.9[213544]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621951.4568024-1385-172779068881872/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:52 compute-0 sudo[213542]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:52 compute-0 sudo[213694]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwvklpjtgnfbmyzzvtwrfhkxomuunukg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621952.6031065-1400-15967789542310/AnsiballZ_stat.py'
Dec 01 20:45:52 compute-0 sudo[213694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:53 compute-0 python3.9[213696]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:53 compute-0 sudo[213694]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:53 compute-0 sudo[213817]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqrvjivkwbldgxartowmnyarqgynezdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621952.6031065-1400-15967789542310/AnsiballZ_copy.py'
Dec 01 20:45:53 compute-0 sudo[213817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:53 compute-0 python3.9[213819]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621952.6031065-1400-15967789542310/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:53 compute-0 sudo[213817]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:53 compute-0 ceph-mon[75880]: pgmap v485: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:54 compute-0 sudo[213969]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptcfbzkfzolviswpnvtremddshvtwuto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621953.949936-1415-96355101150974/AnsiballZ_stat.py'
Dec 01 20:45:54 compute-0 sudo[213969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:54 compute-0 python3.9[213971]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:45:54 compute-0 sudo[213969]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:54 compute-0 sudo[214092]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yigendeeensmusvnghdtwhwuwsadjiyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621953.949936-1415-96355101150974/AnsiballZ_copy.py'
Dec 01 20:45:54 compute-0 sudo[214092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:54 compute-0 python3.9[214094]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621953.949936-1415-96355101150974/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:45:55 compute-0 sudo[214092]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:55 compute-0 sudo[214244]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzxhljckuzqceybtszlxuwemzgvutwde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621955.1722496-1430-140138923591190/AnsiballZ_systemd.py'
Dec 01 20:45:55 compute-0 sudo[214244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:55 compute-0 python3.9[214246]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:45:55 compute-0 systemd[1]: Reloading.
Dec 01 20:45:55 compute-0 systemd-rc-local-generator[214274]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:45:55 compute-0 systemd-sysv-generator[214277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:45:55 compute-0 ceph-mon[75880]: pgmap v486: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:56 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 01 20:45:56 compute-0 sudo[214244]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:56 compute-0 sudo[214435]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laycqabpcensnfbdjfhcxwbzsnjkhhzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621956.2678287-1438-69790802376473/AnsiballZ_systemd.py'
Dec 01 20:45:56 compute-0 sudo[214435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:45:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:56 compute-0 python3.9[214437]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 01 20:45:56 compute-0 systemd[1]: Reloading.
Dec 01 20:45:56 compute-0 systemd-rc-local-generator[214463]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:45:56 compute-0 systemd-sysv-generator[214467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:45:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:45:57 compute-0 ceph-mon[75880]: pgmap v487: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:58 compute-0 systemd[1]: Reloading.
Dec 01 20:45:58 compute-0 systemd-rc-local-generator[214499]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:45:58 compute-0 systemd-sysv-generator[214506]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:45:58 compute-0 sudo[214435]: pam_unix(sudo:session): session closed for user root
Dec 01 20:45:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:45:58 compute-0 sshd-session[155990]: Connection closed by 192.168.122.30 port 50590
Dec 01 20:45:58 compute-0 sshd-session[155987]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:45:58 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec 01 20:45:58 compute-0 systemd[1]: session-50.scope: Consumed 3min 19.082s CPU time.
Dec 01 20:45:58 compute-0 systemd-logind[796]: Session 50 logged out. Waiting for processes to exit.
Dec 01 20:45:58 compute-0 systemd-logind[796]: Removed session 50.
Dec 01 20:45:59 compute-0 ceph-mon[75880]: pgmap v488: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v489: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:01 compute-0 anacron[7544]: Job `cron.monthly' started
Dec 01 20:46:01 compute-0 anacron[7544]: Job `cron.monthly' terminated
Dec 01 20:46:01 compute-0 anacron[7544]: Normal exit (3 jobs run)
Dec 01 20:46:01 compute-0 ceph-mon[75880]: pgmap v489: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:46:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:46:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:46:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:46:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:46:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:46:03 compute-0 ceph-mon[75880]: pgmap v490: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:04 compute-0 sshd-session[214536]: Accepted publickey for zuul from 192.168.122.30 port 33788 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:46:04 compute-0 systemd-logind[796]: New session 51 of user zuul.
Dec 01 20:46:04 compute-0 systemd[1]: Started Session 51 of User zuul.
Dec 01 20:46:04 compute-0 sshd-session[214536]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:46:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:05 compute-0 python3.9[214689]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:46:05 compute-0 ceph-mon[75880]: pgmap v491: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:06 compute-0 python3.9[214843]: ansible-ansible.builtin.service_facts Invoked
Dec 01 20:46:06 compute-0 network[214860]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 01 20:46:06 compute-0 network[214861]: 'network-scripts' will be removed from distribution in near future.
Dec 01 20:46:06 compute-0 network[214862]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 01 20:46:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:08 compute-0 ceph-mon[75880]: pgmap v492: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:10 compute-0 ceph-mon[75880]: pgmap v493: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:10 compute-0 sudo[215132]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ostqkfzzhgnpkuxsqakzltjdvrvifktx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621970.1406875-47-250790866364780/AnsiballZ_setup.py'
Dec 01 20:46:10 compute-0 sudo[215132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:10 compute-0 python3.9[215134]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 01 20:46:11 compute-0 sudo[215132]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:11 compute-0 sudo[215216]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcllencnzwimwskzuckjpnxcrnndiffx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621970.1406875-47-250790866364780/AnsiballZ_dnf.py'
Dec 01 20:46:11 compute-0 sudo[215216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:11 compute-0 python3.9[215218]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:46:12 compute-0 ceph-mon[75880]: pgmap v494: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:14 compute-0 ceph-mon[75880]: pgmap v495: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:15 compute-0 podman[215220]: 2025-12-01 20:46:15.11202244 +0000 UTC m=+0.071538775 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 01 20:46:16 compute-0 podman[215239]: 2025-12-01 20:46:16.124691757 +0000 UTC m=+0.082300201 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 01 20:46:16 compute-0 ceph-mon[75880]: pgmap v496: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v497: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:17 compute-0 sudo[215216]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:17 compute-0 sudo[215414]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulgtlyngfmhuwrffoisikfhdlbmabdlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621977.2370539-59-5970589165972/AnsiballZ_stat.py'
Dec 01 20:46:17 compute-0 sudo[215414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:17 compute-0 python3.9[215416]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:46:17 compute-0 sudo[215414]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:18 compute-0 ceph-mon[75880]: pgmap v497: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:18 compute-0 sudo[215566]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnhylhhliutzuhihlkmifsbqkgylofvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621978.0227592-69-130515475779155/AnsiballZ_command.py'
Dec 01 20:46:18 compute-0 sudo[215566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:18 compute-0 python3.9[215568]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:46:18 compute-0 sudo[215566]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v498: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:19 compute-0 sudo[215719]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnoiinutqpkxjkhhctrorvpxlxpervur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621978.9209824-79-146905543878502/AnsiballZ_stat.py'
Dec 01 20:46:19 compute-0 sudo[215719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:19 compute-0 python3.9[215721]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:46:19 compute-0 sudo[215719]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:19 compute-0 sudo[215871]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tovrmghoctonspfvwqlnsriltmvjxdlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621979.5569272-87-138972780541367/AnsiballZ_command.py'
Dec 01 20:46:19 compute-0 sudo[215871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:20 compute-0 python3.9[215873]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:46:20 compute-0 sudo[215871]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:20 compute-0 ceph-mon[75880]: pgmap v498: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:20 compute-0 sudo[216024]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkuewjezgvgcvpytktufpflwdiqtprlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621980.266133-95-154754200777032/AnsiballZ_stat.py'
Dec 01 20:46:20 compute-0 sudo[216024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:20 compute-0 python3.9[216026]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:46:20 compute-0 sudo[216024]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:21 compute-0 sudo[216147]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldsshpwzlqgenmhxcoomztrunbubdhfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621980.266133-95-154754200777032/AnsiballZ_copy.py'
Dec 01 20:46:21 compute-0 sudo[216147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:21 compute-0 ceph-mon[75880]: pgmap v499: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:21 compute-0 python3.9[216149]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621980.266133-95-154754200777032/.source.iscsi _original_basename=.dqkz66aa follow=False checksum=8b0d7aa9fdd1896ce4f91f3dbcd56174948b2c2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:21 compute-0 sudo[216147]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:22 compute-0 sudo[216299]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eckoopxfhifjokczlkeilmrfvbshqand ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621981.7791572-110-19215116175980/AnsiballZ_file.py'
Dec 01 20:46:22 compute-0 sudo[216299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:22 compute-0 python3.9[216301]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:22 compute-0 sudo[216299]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v500: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:23 compute-0 sudo[216451]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gapnumubjcnykgyztywgbpghkebrcdwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621982.614087-118-72251709257523/AnsiballZ_lineinfile.py'
Dec 01 20:46:23 compute-0 sudo[216451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:23 compute-0 python3.9[216453]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:23 compute-0 sudo[216451]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:23 compute-0 ceph-mon[75880]: pgmap v500: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:24 compute-0 sudo[216603]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylmhdxoyzcniuxzroxycdtalwhkinila ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621983.5755665-127-209448135227915/AnsiballZ_systemd_service.py'
Dec 01 20:46:24 compute-0 sudo[216603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:24 compute-0 python3.9[216605]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:46:24 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 01 20:46:24 compute-0 sudo[216603]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:25 compute-0 sudo[216759]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiruubypyhbvbdeniqecfchwpetkghft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621984.8350718-135-251478343118626/AnsiballZ_systemd_service.py'
Dec 01 20:46:25 compute-0 sudo[216759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:25 compute-0 python3.9[216761]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:46:25 compute-0 systemd[1]: Reloading.
Dec 01 20:46:25 compute-0 systemd-rc-local-generator[216792]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:46:25 compute-0 systemd-sysv-generator[216796]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:46:25 compute-0 ceph-mon[75880]: pgmap v501: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:25 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 01 20:46:25 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 01 20:46:25 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 01 20:46:25 compute-0 systemd[1]: Started Open-iSCSI.
Dec 01 20:46:25 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 01 20:46:25 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 01 20:46:25 compute-0 sudo[216759]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:26 compute-0 sudo[216961]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arnfpkhftjhdeghypiwdobzcztoxvbtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621986.3022797-146-238520712459373/AnsiballZ_service_facts.py'
Dec 01 20:46:26 compute-0 sudo[216961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:26 compute-0 python3.9[216963]: ansible-ansible.builtin.service_facts Invoked
Dec 01 20:46:26 compute-0 network[216980]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 01 20:46:26 compute-0 network[216981]: 'network-scripts' will be removed from distribution in near future.
Dec 01 20:46:26 compute-0 network[216982]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 01 20:46:27 compute-0 ceph-mon[75880]: pgmap v502: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v503: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:29 compute-0 ceph-mon[75880]: pgmap v503: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:30 compute-0 sudo[217088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:46:30 compute-0 sudo[217088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:46:30 compute-0 sudo[217088]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:30 compute-0 sudo[217118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:46:30 compute-0 sudo[217118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:46:30 compute-0 sudo[216961]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:31 compute-0 sudo[217118]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:46:31 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:46:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:46:31 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:46:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:46:31 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:46:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:46:31 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:46:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:46:31 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:46:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:46:31 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:46:31 compute-0 sudo[217284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:46:31 compute-0 sudo[217284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:46:31 compute-0 sudo[217284]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:31 compute-0 sudo[217333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:46:31 compute-0 sudo[217333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:46:31 compute-0 sudo[217381]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxinioynldmlncnirhehfztfkbkixynj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621991.1346245-156-191861097124240/AnsiballZ_file.py'
Dec 01 20:46:31 compute-0 sudo[217381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:31 compute-0 python3.9[217385]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 01 20:46:31 compute-0 sudo[217381]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:31 compute-0 podman[217398]: 2025-12-01 20:46:31.690226681 +0000 UTC m=+0.051873106 container create 8d84faecd8c5cca577ce57cc856fe3cf6eb7e37844cb415f5eee4a1eb2c007c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldberg, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:46:31 compute-0 systemd[1]: Started libpod-conmon-8d84faecd8c5cca577ce57cc856fe3cf6eb7e37844cb415f5eee4a1eb2c007c7.scope.
Dec 01 20:46:31 compute-0 podman[217398]: 2025-12-01 20:46:31.669636738 +0000 UTC m=+0.031283203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:46:31 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:46:31 compute-0 podman[217398]: 2025-12-01 20:46:31.780619396 +0000 UTC m=+0.142265841 container init 8d84faecd8c5cca577ce57cc856fe3cf6eb7e37844cb415f5eee4a1eb2c007c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:46:31 compute-0 podman[217398]: 2025-12-01 20:46:31.788166676 +0000 UTC m=+0.149813101 container start 8d84faecd8c5cca577ce57cc856fe3cf6eb7e37844cb415f5eee4a1eb2c007c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:46:31 compute-0 podman[217398]: 2025-12-01 20:46:31.791287655 +0000 UTC m=+0.152934100 container attach 8d84faecd8c5cca577ce57cc856fe3cf6eb7e37844cb415f5eee4a1eb2c007c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 20:46:31 compute-0 beautiful_goldberg[217437]: 167 167
Dec 01 20:46:31 compute-0 systemd[1]: libpod-8d84faecd8c5cca577ce57cc856fe3cf6eb7e37844cb415f5eee4a1eb2c007c7.scope: Deactivated successfully.
Dec 01 20:46:31 compute-0 podman[217398]: 2025-12-01 20:46:31.795640763 +0000 UTC m=+0.157287198 container died 8d84faecd8c5cca577ce57cc856fe3cf6eb7e37844cb415f5eee4a1eb2c007c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldberg, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:46:31 compute-0 ceph-mon[75880]: pgmap v504: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:31 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:46:31 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:46:31 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:46:31 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:46:31 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:46:31 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:46:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b74695e5bc29595c9e9f4fc8d4bed97f6cba1cd4ef79e770dda3ef6b3b380c54-merged.mount: Deactivated successfully.
Dec 01 20:46:31 compute-0 podman[217398]: 2025-12-01 20:46:31.843712237 +0000 UTC m=+0.205358662 container remove 8d84faecd8c5cca577ce57cc856fe3cf6eb7e37844cb415f5eee4a1eb2c007c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:46:31 compute-0 systemd[1]: libpod-conmon-8d84faecd8c5cca577ce57cc856fe3cf6eb7e37844cb415f5eee4a1eb2c007c7.scope: Deactivated successfully.
Dec 01 20:46:32 compute-0 podman[217512]: 2025-12-01 20:46:32.04128643 +0000 UTC m=+0.058938500 container create 99e5f5612428559c154789812549805dd3adb5f681d98078324473108d8a21b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_sinoussi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:46:32 compute-0 systemd[1]: Started libpod-conmon-99e5f5612428559c154789812549805dd3adb5f681d98078324473108d8a21b9.scope.
Dec 01 20:46:32 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a937540d2751ca2ff55dace81131d926096e5efb85e963927655b4d85fe172/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a937540d2751ca2ff55dace81131d926096e5efb85e963927655b4d85fe172/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a937540d2751ca2ff55dace81131d926096e5efb85e963927655b4d85fe172/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a937540d2751ca2ff55dace81131d926096e5efb85e963927655b4d85fe172/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a937540d2751ca2ff55dace81131d926096e5efb85e963927655b4d85fe172/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:32 compute-0 podman[217512]: 2025-12-01 20:46:32.014647536 +0000 UTC m=+0.032299706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:46:32 compute-0 podman[217512]: 2025-12-01 20:46:32.111154255 +0000 UTC m=+0.128806345 container init 99e5f5612428559c154789812549805dd3adb5f681d98078324473108d8a21b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_sinoussi, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 01 20:46:32 compute-0 podman[217512]: 2025-12-01 20:46:32.121303527 +0000 UTC m=+0.138955597 container start 99e5f5612428559c154789812549805dd3adb5f681d98078324473108d8a21b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 01 20:46:32 compute-0 podman[217512]: 2025-12-01 20:46:32.124640313 +0000 UTC m=+0.142292383 container attach 99e5f5612428559c154789812549805dd3adb5f681d98078324473108d8a21b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:46:32 compute-0 sudo[217606]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njxbkatwrqzpcuggdkswumtlkwftlzvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621991.7858589-164-113700914132987/AnsiballZ_modprobe.py'
Dec 01 20:46:32 compute-0 sudo[217606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:32 compute-0 python3.9[217608]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 01 20:46:32 compute-0 sudo[217606]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:46:32
Dec 01 20:46:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:46:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:46:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', 'vms']
Dec 01 20:46:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:46:32 compute-0 admiring_sinoussi[217551]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:46:32 compute-0 admiring_sinoussi[217551]: --> All data devices are unavailable
Dec 01 20:46:32 compute-0 systemd[1]: libpod-99e5f5612428559c154789812549805dd3adb5f681d98078324473108d8a21b9.scope: Deactivated successfully.
Dec 01 20:46:32 compute-0 podman[217512]: 2025-12-01 20:46:32.596803161 +0000 UTC m=+0.614455241 container died 99e5f5612428559c154789812549805dd3adb5f681d98078324473108d8a21b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 01 20:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-72a937540d2751ca2ff55dace81131d926096e5efb85e963927655b4d85fe172-merged.mount: Deactivated successfully.
Dec 01 20:46:32 compute-0 podman[217512]: 2025-12-01 20:46:32.635747206 +0000 UTC m=+0.653399276 container remove 99e5f5612428559c154789812549805dd3adb5f681d98078324473108d8a21b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Dec 01 20:46:32 compute-0 systemd[1]: libpod-conmon-99e5f5612428559c154789812549805dd3adb5f681d98078324473108d8a21b9.scope: Deactivated successfully.
Dec 01 20:46:32 compute-0 sudo[217333]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:32 compute-0 sudo[217689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:46:32 compute-0 sudo[217689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:46:32 compute-0 sudo[217689]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:32 compute-0 sudo[217742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:46:32 compute-0 sudo[217742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:46:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:32 compute-0 sudo[217840]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxwptscngpavxufdkgyyujeichuwatw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621992.671999-172-161811190588057/AnsiballZ_stat.py'
Dec 01 20:46:32 compute-0 sudo[217840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:33 compute-0 podman[217855]: 2025-12-01 20:46:33.076361185 +0000 UTC m=+0.039366459 container create d67160669da380a8aeda5afb72b79793862b39e8497a194ccf35d1bf914eff23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:46:33 compute-0 systemd[1]: Started libpod-conmon-d67160669da380a8aeda5afb72b79793862b39e8497a194ccf35d1bf914eff23.scope.
Dec 01 20:46:33 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:46:33 compute-0 podman[217855]: 2025-12-01 20:46:33.058227919 +0000 UTC m=+0.021233213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:46:33 compute-0 python3.9[217844]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:46:33 compute-0 podman[217855]: 2025-12-01 20:46:33.160874293 +0000 UTC m=+0.123879587 container init d67160669da380a8aeda5afb72b79793862b39e8497a194ccf35d1bf914eff23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:46:33 compute-0 podman[217855]: 2025-12-01 20:46:33.169234968 +0000 UTC m=+0.132240242 container start d67160669da380a8aeda5afb72b79793862b39e8497a194ccf35d1bf914eff23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:46:33 compute-0 sudo[217840]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:33 compute-0 podman[217855]: 2025-12-01 20:46:33.172535523 +0000 UTC m=+0.135540797 container attach d67160669da380a8aeda5afb72b79793862b39e8497a194ccf35d1bf914eff23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:46:33 compute-0 infallible_mirzakhani[217871]: 167 167
Dec 01 20:46:33 compute-0 systemd[1]: libpod-d67160669da380a8aeda5afb72b79793862b39e8497a194ccf35d1bf914eff23.scope: Deactivated successfully.
Dec 01 20:46:33 compute-0 podman[217855]: 2025-12-01 20:46:33.175784096 +0000 UTC m=+0.138789380 container died d67160669da380a8aeda5afb72b79793862b39e8497a194ccf35d1bf914eff23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 01 20:46:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0450e17cf058320fa713e99e29048cfe86ae30db0a12bc8241b38ef6e7b571a1-merged.mount: Deactivated successfully.
Dec 01 20:46:33 compute-0 podman[217855]: 2025-12-01 20:46:33.211947182 +0000 UTC m=+0.174952466 container remove d67160669da380a8aeda5afb72b79793862b39e8497a194ccf35d1bf914eff23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 01 20:46:33 compute-0 systemd[1]: libpod-conmon-d67160669da380a8aeda5afb72b79793862b39e8497a194ccf35d1bf914eff23.scope: Deactivated successfully.
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:46:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:46:33 compute-0 podman[217942]: 2025-12-01 20:46:33.384622157 +0000 UTC m=+0.041297211 container create adf60ab7ef08fb3fcd83ad7a22a62b7bb2134e5383505d3fb2ef4952df755026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 01 20:46:33 compute-0 systemd[1]: Started libpod-conmon-adf60ab7ef08fb3fcd83ad7a22a62b7bb2134e5383505d3fb2ef4952df755026.scope.
Dec 01 20:46:33 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b206cad90f830cd2c0c9560265697f117aaa5a4213ed3c95e652bc44e7d04872/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b206cad90f830cd2c0c9560265697f117aaa5a4213ed3c95e652bc44e7d04872/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b206cad90f830cd2c0c9560265697f117aaa5a4213ed3c95e652bc44e7d04872/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b206cad90f830cd2c0c9560265697f117aaa5a4213ed3c95e652bc44e7d04872/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:33 compute-0 podman[217942]: 2025-12-01 20:46:33.457110615 +0000 UTC m=+0.113785689 container init adf60ab7ef08fb3fcd83ad7a22a62b7bb2134e5383505d3fb2ef4952df755026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcclintock, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:46:33 compute-0 podman[217942]: 2025-12-01 20:46:33.366878354 +0000 UTC m=+0.023553428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:46:33 compute-0 podman[217942]: 2025-12-01 20:46:33.465535012 +0000 UTC m=+0.122210056 container start adf60ab7ef08fb3fcd83ad7a22a62b7bb2134e5383505d3fb2ef4952df755026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:46:33 compute-0 podman[217942]: 2025-12-01 20:46:33.468754993 +0000 UTC m=+0.125430037 container attach adf60ab7ef08fb3fcd83ad7a22a62b7bb2134e5383505d3fb2ef4952df755026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 20:46:33 compute-0 sudo[218037]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssxvfxfjkyrfvwoewgmoyauyybfmmyeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621992.671999-172-161811190588057/AnsiballZ_copy.py'
Dec 01 20:46:33 compute-0 sudo[218037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:33 compute-0 python3.9[218039]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621992.671999-172-161811190588057/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:33 compute-0 sudo[218037]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]: {
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:     "0": [
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:         {
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "devices": [
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "/dev/loop3"
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             ],
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_name": "ceph_lv0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_size": "21470642176",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "name": "ceph_lv0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "tags": {
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.cluster_name": "ceph",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.crush_device_class": "",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.encrypted": "0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.objectstore": "bluestore",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.osd_id": "0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.type": "block",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.vdo": "0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.with_tpm": "0"
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             },
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "type": "block",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "vg_name": "ceph_vg0"
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:         }
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:     ],
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:     "1": [
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:         {
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "devices": [
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "/dev/loop4"
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             ],
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_name": "ceph_lv1",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_size": "21470642176",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "name": "ceph_lv1",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "tags": {
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.cluster_name": "ceph",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.crush_device_class": "",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.encrypted": "0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.objectstore": "bluestore",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.osd_id": "1",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.type": "block",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.vdo": "0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.with_tpm": "0"
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             },
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "type": "block",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "vg_name": "ceph_vg1"
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:         }
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:     ],
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:     "2": [
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:         {
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "devices": [
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "/dev/loop5"
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             ],
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_name": "ceph_lv2",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_size": "21470642176",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "name": "ceph_lv2",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "tags": {
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.cluster_name": "ceph",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.crush_device_class": "",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.encrypted": "0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.objectstore": "bluestore",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.osd_id": "2",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.type": "block",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.vdo": "0",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:                 "ceph.with_tpm": "0"
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             },
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "type": "block",
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:             "vg_name": "ceph_vg2"
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:         }
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]:     ]
Dec 01 20:46:33 compute-0 festive_mcclintock[217988]: }
Dec 01 20:46:33 compute-0 systemd[1]: libpod-adf60ab7ef08fb3fcd83ad7a22a62b7bb2134e5383505d3fb2ef4952df755026.scope: Deactivated successfully.
Dec 01 20:46:33 compute-0 podman[217942]: 2025-12-01 20:46:33.806616555 +0000 UTC m=+0.463291639 container died adf60ab7ef08fb3fcd83ad7a22a62b7bb2134e5383505d3fb2ef4952df755026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 01 20:46:33 compute-0 ceph-mon[75880]: pgmap v505: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b206cad90f830cd2c0c9560265697f117aaa5a4213ed3c95e652bc44e7d04872-merged.mount: Deactivated successfully.
Dec 01 20:46:33 compute-0 podman[217942]: 2025-12-01 20:46:33.853896084 +0000 UTC m=+0.510571148 container remove adf60ab7ef08fb3fcd83ad7a22a62b7bb2134e5383505d3fb2ef4952df755026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:46:33 compute-0 systemd[1]: libpod-conmon-adf60ab7ef08fb3fcd83ad7a22a62b7bb2134e5383505d3fb2ef4952df755026.scope: Deactivated successfully.
Dec 01 20:46:33 compute-0 sudo[217742]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:33 compute-0 sudo[218080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:46:33 compute-0 sudo[218080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:46:33 compute-0 sudo[218080]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:34 compute-0 sudo[218130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:46:34 compute-0 sudo[218130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:46:34 compute-0 sudo[218255]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyhetreduggorwotlgcpiisdtekqmwzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621993.9816835-188-241097860141696/AnsiballZ_lineinfile.py'
Dec 01 20:46:34 compute-0 sudo[218255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:34 compute-0 podman[218271]: 2025-12-01 20:46:34.320984731 +0000 UTC m=+0.040819914 container create b157f3cb036cde7a54e8a4cfac055dee1373d4425af02544ce8b897d0ac011f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 01 20:46:34 compute-0 systemd[1]: Started libpod-conmon-b157f3cb036cde7a54e8a4cfac055dee1373d4425af02544ce8b897d0ac011f8.scope.
Dec 01 20:46:34 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:46:34 compute-0 podman[218271]: 2025-12-01 20:46:34.302346071 +0000 UTC m=+0.022181274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:46:34 compute-0 podman[218271]: 2025-12-01 20:46:34.405710947 +0000 UTC m=+0.125546221 container init b157f3cb036cde7a54e8a4cfac055dee1373d4425af02544ce8b897d0ac011f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:46:34 compute-0 podman[218271]: 2025-12-01 20:46:34.412544064 +0000 UTC m=+0.132379247 container start b157f3cb036cde7a54e8a4cfac055dee1373d4425af02544ce8b897d0ac011f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:46:34 compute-0 podman[218271]: 2025-12-01 20:46:34.416631904 +0000 UTC m=+0.136467127 container attach b157f3cb036cde7a54e8a4cfac055dee1373d4425af02544ce8b897d0ac011f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_tharp, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:46:34 compute-0 jovial_tharp[218287]: 167 167
Dec 01 20:46:34 compute-0 systemd[1]: libpod-b157f3cb036cde7a54e8a4cfac055dee1373d4425af02544ce8b897d0ac011f8.scope: Deactivated successfully.
Dec 01 20:46:34 compute-0 podman[218271]: 2025-12-01 20:46:34.419631799 +0000 UTC m=+0.139466992 container died b157f3cb036cde7a54e8a4cfac055dee1373d4425af02544ce8b897d0ac011f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Dec 01 20:46:34 compute-0 python3.9[218259]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-dace9575cb799434b3f969eebb999d82aa93e8e69ef6f797eb2bc7a214ab9ebf-merged.mount: Deactivated successfully.
Dec 01 20:46:34 compute-0 podman[218271]: 2025-12-01 20:46:34.456330082 +0000 UTC m=+0.176165255 container remove b157f3cb036cde7a54e8a4cfac055dee1373d4425af02544ce8b897d0ac011f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_tharp, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:46:34 compute-0 sudo[218255]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:34 compute-0 systemd[1]: libpod-conmon-b157f3cb036cde7a54e8a4cfac055dee1373d4425af02544ce8b897d0ac011f8.scope: Deactivated successfully.
Dec 01 20:46:34 compute-0 podman[218358]: 2025-12-01 20:46:34.667277259 +0000 UTC m=+0.038169611 container create 89592ef970c928a8bf385510f332281cf5c413e78b8a3c006b1830bc28ee768c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:46:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v506: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:34 compute-0 systemd[1]: Started libpod-conmon-89592ef970c928a8bf385510f332281cf5c413e78b8a3c006b1830bc28ee768c.scope.
Dec 01 20:46:34 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f9307534d8297f3e6dadfe822756159eae78300328262439228f3fb14697c79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f9307534d8297f3e6dadfe822756159eae78300328262439228f3fb14697c79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f9307534d8297f3e6dadfe822756159eae78300328262439228f3fb14697c79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f9307534d8297f3e6dadfe822756159eae78300328262439228f3fb14697c79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:46:34 compute-0 podman[218358]: 2025-12-01 20:46:34.743391732 +0000 UTC m=+0.114284114 container init 89592ef970c928a8bf385510f332281cf5c413e78b8a3c006b1830bc28ee768c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:46:34 compute-0 podman[218358]: 2025-12-01 20:46:34.650544239 +0000 UTC m=+0.021436611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:46:34 compute-0 podman[218358]: 2025-12-01 20:46:34.754495784 +0000 UTC m=+0.125388126 container start 89592ef970c928a8bf385510f332281cf5c413e78b8a3c006b1830bc28ee768c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:46:34 compute-0 podman[218358]: 2025-12-01 20:46:34.757616313 +0000 UTC m=+0.128508685 container attach 89592ef970c928a8bf385510f332281cf5c413e78b8a3c006b1830bc28ee768c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:46:35 compute-0 sudo[218532]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sikjjqioyuzskorzlzakjcqjwsynowim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621994.6139333-196-29617096705855/AnsiballZ_systemd.py'
Dec 01 20:46:35 compute-0 sudo[218532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:35 compute-0 lvm[218554]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:46:35 compute-0 lvm[218554]: VG ceph_vg0 finished
Dec 01 20:46:35 compute-0 lvm[218557]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:46:35 compute-0 lvm[218557]: VG ceph_vg1 finished
Dec 01 20:46:35 compute-0 lvm[218559]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:46:35 compute-0 lvm[218559]: VG ceph_vg2 finished
Dec 01 20:46:35 compute-0 jolly_aryabhata[218403]: {}
Dec 01 20:46:35 compute-0 python3.9[218540]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:46:35 compute-0 systemd[1]: libpod-89592ef970c928a8bf385510f332281cf5c413e78b8a3c006b1830bc28ee768c.scope: Deactivated successfully.
Dec 01 20:46:35 compute-0 podman[218358]: 2025-12-01 20:46:35.537518788 +0000 UTC m=+0.908411140 container died 89592ef970c928a8bf385510f332281cf5c413e78b8a3c006b1830bc28ee768c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:46:35 compute-0 systemd[1]: libpod-89592ef970c928a8bf385510f332281cf5c413e78b8a3c006b1830bc28ee768c.scope: Consumed 1.277s CPU time.
Dec 01 20:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f9307534d8297f3e6dadfe822756159eae78300328262439228f3fb14697c79-merged.mount: Deactivated successfully.
Dec 01 20:46:35 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 01 20:46:35 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 01 20:46:35 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 01 20:46:35 compute-0 podman[218358]: 2025-12-01 20:46:35.581244544 +0000 UTC m=+0.952136896 container remove 89592ef970c928a8bf385510f332281cf5c413e78b8a3c006b1830bc28ee768c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 01 20:46:35 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 01 20:46:35 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 01 20:46:35 compute-0 systemd[1]: libpod-conmon-89592ef970c928a8bf385510f332281cf5c413e78b8a3c006b1830bc28ee768c.scope: Deactivated successfully.
Dec 01 20:46:35 compute-0 sudo[218532]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:35 compute-0 sudo[218130]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:46:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:46:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:46:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:46:35 compute-0 sudo[218580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:46:35 compute-0 sudo[218580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:46:35 compute-0 sudo[218580]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:35 compute-0 ceph-mon[75880]: pgmap v506: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:35 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:46:35 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:46:36 compute-0 sudo[218753]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uemrrxxcambtbscbarcgsoxlbssmneza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621995.8140824-204-278745851771508/AnsiballZ_file.py'
Dec 01 20:46:36 compute-0 sudo[218753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:36 compute-0 python3.9[218755]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:46:36 compute-0 sudo[218753]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:36 compute-0 sudo[218905]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxxvouyxotsimltbcuphvpoagdytqhlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621996.4979923-213-181689994602730/AnsiballZ_stat.py'
Dec 01 20:46:36 compute-0 sudo[218905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:37 compute-0 python3.9[218907]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:46:37 compute-0 sudo[218905]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:37 compute-0 sudo[219057]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbjnyqpqmeysvkayhgborcpfbbvcivba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621997.2177663-222-58580912040820/AnsiballZ_stat.py'
Dec 01 20:46:37 compute-0 sudo[219057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:37 compute-0 python3.9[219059]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:46:37 compute-0 sudo[219057]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:37 compute-0 ceph-mon[75880]: pgmap v507: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:38 compute-0 sudo[219209]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkeevzztklvnpdsqkszuertixbfvfcis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621997.8326516-230-193714091846355/AnsiballZ_stat.py'
Dec 01 20:46:38 compute-0 sudo[219209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:38 compute-0 python3.9[219211]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:46:38 compute-0 sudo[219209]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:38 compute-0 sudo[219332]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnibaauigkbiiupmayxxrbaopmffjgaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621997.8326516-230-193714091846355/AnsiballZ_copy.py'
Dec 01 20:46:38 compute-0 sudo[219332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:38 compute-0 python3.9[219334]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764621997.8326516-230-193714091846355/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:38 compute-0 sudo[219332]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:39 compute-0 sudo[219484]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixsmrteiluwkxawqatttllzyybsatbrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621998.9899445-245-24309153453561/AnsiballZ_command.py'
Dec 01 20:46:39 compute-0 sudo[219484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:39 compute-0 python3.9[219486]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:46:39 compute-0 sudo[219484]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:39 compute-0 sudo[219637]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtbzlblypiyhjxjwmpnjehskklbqojvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764621999.5913897-253-244526439418519/AnsiballZ_lineinfile.py'
Dec 01 20:46:39 compute-0 sudo[219637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:39 compute-0 ceph-mon[75880]: pgmap v508: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:40 compute-0 python3.9[219639]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:40 compute-0 sudo[219637]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:46:40 compute-0 sudo[219789]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udfqayydrnvdaehruesvqoytpuybmrhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622000.204784-261-18627817533333/AnsiballZ_replace.py'
Dec 01 20:46:40 compute-0 sudo[219789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v509: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:40 compute-0 python3.9[219791]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:40 compute-0 sudo[219789]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:41 compute-0 sudo[219941]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpdmjwyulnbzmltemberopobsboizxtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622001.0004964-269-46338544171280/AnsiballZ_replace.py'
Dec 01 20:46:41 compute-0 sudo[219941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:41 compute-0 python3.9[219943]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:41 compute-0 sudo[219941]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:41 compute-0 sudo[220093]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrikrffjticxvvvxiqcapcglicpzdxzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622001.641629-278-279947690701078/AnsiballZ_lineinfile.py'
Dec 01 20:46:41 compute-0 sudo[220093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:41 compute-0 ceph-mon[75880]: pgmap v509: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:42 compute-0 python3.9[220095]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:42 compute-0 sudo[220093]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:42 compute-0 sudo[220245]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwwtzuhcxjuhnccpwsevnnmrdyrkatze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622002.1639516-278-69085373946932/AnsiballZ_lineinfile.py'
Dec 01 20:46:42 compute-0 sudo[220245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:42 compute-0 python3.9[220247]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:42 compute-0 sudo[220245]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:42 compute-0 sudo[220397]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqopzddisktxffudcgwgzyupklynqexs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622002.6938422-278-119193375714007/AnsiballZ_lineinfile.py'
Dec 01 20:46:42 compute-0 sudo[220397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:43 compute-0 python3.9[220399]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:43 compute-0 sudo[220397]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:43 compute-0 sudo[220549]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gubqpcnyonwmbahpqpbpbwrwwtwpjbyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622003.2675664-278-111908892572830/AnsiballZ_lineinfile.py'
Dec 01 20:46:43 compute-0 sudo[220549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:43 compute-0 python3.9[220551]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:43 compute-0 sudo[220549]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:43 compute-0 ceph-mon[75880]: pgmap v510: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:44 compute-0 sudo[220701]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgvonppcfpbhavbkddisscvnxzsvynzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622003.928967-307-157772300207261/AnsiballZ_stat.py'
Dec 01 20:46:44 compute-0 sudo[220701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:44 compute-0 python3.9[220703]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:46:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:46:44.345 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:46:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:46:44.347 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:46:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:46:44.347 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:46:44 compute-0 sudo[220701]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:44 compute-0 sudo[220855]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcwsyshqpthdqckqxdjiooswyqftaqbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622004.5386703-315-85486237514994/AnsiballZ_file.py'
Dec 01 20:46:44 compute-0 sudo[220855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:44 compute-0 python3.9[220857]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:44 compute-0 sudo[220855]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:45 compute-0 sudo[221019]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwvcfgmmkiaqwhcoouwczebialgdigvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622005.2329803-324-145843088818689/AnsiballZ_file.py'
Dec 01 20:46:45 compute-0 sudo[221019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:45 compute-0 podman[220981]: 2025-12-01 20:46:45.573118288 +0000 UTC m=+0.070758805 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Dec 01 20:46:45 compute-0 python3.9[221021]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:46:45 compute-0 sudo[221019]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:45 compute-0 ceph-mon[75880]: pgmap v511: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:46 compute-0 sudo[221198]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xihcmajpkslhseilsllymrhazzyvjdye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622005.955039-332-11417940858410/AnsiballZ_stat.py'
Dec 01 20:46:46 compute-0 sudo[221198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:46 compute-0 podman[221153]: 2025-12-01 20:46:46.235852527 +0000 UTC m=+0.077376923 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 01 20:46:46 compute-0 python3.9[221204]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:46:46 compute-0 sudo[221198]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:46 compute-0 sudo[221284]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyzbkpdqucmxaphzwrqymajnbggfnvru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622005.955039-332-11417940858410/AnsiballZ_file.py'
Dec 01 20:46:46 compute-0 sudo[221284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:46 compute-0 python3.9[221286]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:46:46 compute-0 sudo[221284]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:47 compute-0 sudo[221436]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svjskxlipcrlpvrevuelphkfudaywtng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622006.988729-332-137145277686623/AnsiballZ_stat.py'
Dec 01 20:46:47 compute-0 sudo[221436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:47 compute-0 python3.9[221438]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:46:47 compute-0 sudo[221436]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:47 compute-0 sudo[221514]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzrzqgtixnihgpgzmnnexoubtwdyvzln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622006.988729-332-137145277686623/AnsiballZ_file.py'
Dec 01 20:46:47 compute-0 sudo[221514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:47 compute-0 python3.9[221516]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:46:47 compute-0 sudo[221514]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:47 compute-0 ceph-mon[75880]: pgmap v512: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:48 compute-0 sudo[221666]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykftpenwbwmelitpxdfqtmateiwdlwmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622008.0623944-355-1271055098686/AnsiballZ_file.py'
Dec 01 20:46:48 compute-0 sudo[221666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:48 compute-0 python3.9[221668]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:48 compute-0 sudo[221666]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:49 compute-0 sudo[221818]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtziqklgcyrpztlwweouutetrznjfxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622008.6810308-363-256724030808194/AnsiballZ_stat.py'
Dec 01 20:46:49 compute-0 sudo[221818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:49 compute-0 python3.9[221820]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:46:49 compute-0 sudo[221818]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:49 compute-0 sudo[221896]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njzxiqvoebvzjticphikbrcqubjlzlol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622008.6810308-363-256724030808194/AnsiballZ_file.py'
Dec 01 20:46:49 compute-0 sudo[221896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:49 compute-0 python3.9[221898]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:49 compute-0 sudo[221896]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:50 compute-0 sudo[222048]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pecfljyhrchnqstagykfspwmtkfronxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622009.8008573-375-9377600418882/AnsiballZ_stat.py'
Dec 01 20:46:50 compute-0 sudo[222048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:50 compute-0 ceph-mon[75880]: pgmap v513: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:50 compute-0 python3.9[222050]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:46:50 compute-0 sudo[222048]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:50 compute-0 sudo[222126]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhnijlbsbzkhuxypuguceiqegyxxjkuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622009.8008573-375-9377600418882/AnsiballZ_file.py'
Dec 01 20:46:50 compute-0 sudo[222126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:50 compute-0 python3.9[222128]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:50 compute-0 sudo[222126]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:51 compute-0 sudo[222278]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmzbrqdwkkyhaersmdtxnhrwmngkxwvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622010.8784618-387-145742921528096/AnsiballZ_systemd.py'
Dec 01 20:46:51 compute-0 sudo[222278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:51 compute-0 python3.9[222280]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:46:51 compute-0 systemd[1]: Reloading.
Dec 01 20:46:51 compute-0 systemd-sysv-generator[222310]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:46:51 compute-0 systemd-rc-local-generator[222305]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:46:52 compute-0 sudo[222278]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:52 compute-0 ceph-mon[75880]: pgmap v514: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:52 compute-0 sudo[222466]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vymdqcixsbpmjvcjaamfozvideoezjsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622012.1723864-395-209607380121783/AnsiballZ_stat.py'
Dec 01 20:46:52 compute-0 sudo[222466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:52 compute-0 python3.9[222468]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:46:52 compute-0 sudo[222466]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:52 compute-0 sudo[222544]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzzizwluzyokidpeokhbagaoisjdzpmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622012.1723864-395-209607380121783/AnsiballZ_file.py'
Dec 01 20:46:52 compute-0 sudo[222544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:53 compute-0 python3.9[222546]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:53 compute-0 sudo[222544]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:53 compute-0 sudo[222696]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxdyavrxmppovivrhoqaptfvasrtynnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622013.2386749-407-128920923465670/AnsiballZ_stat.py'
Dec 01 20:46:53 compute-0 sudo[222696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:53 compute-0 python3.9[222698]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:46:53 compute-0 sudo[222696]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:53 compute-0 sudo[222774]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qblqrkjndsnbwkpakgjwbioqcxtuwfbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622013.2386749-407-128920923465670/AnsiballZ_file.py'
Dec 01 20:46:53 compute-0 sudo[222774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:54 compute-0 python3.9[222776]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:54 compute-0 ceph-mon[75880]: pgmap v515: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:54 compute-0 sudo[222774]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:54 compute-0 sudo[222926]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jixquuiruszwsnooyvxqmhlbzmtnlbyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622014.3273408-419-214141142614456/AnsiballZ_systemd.py'
Dec 01 20:46:54 compute-0 sudo[222926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:54 compute-0 python3.9[222928]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:46:54 compute-0 systemd[1]: Reloading.
Dec 01 20:46:55 compute-0 systemd-sysv-generator[222959]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:46:55 compute-0 systemd-rc-local-generator[222956]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:46:55 compute-0 systemd[1]: Starting Create netns directory...
Dec 01 20:46:55 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 01 20:46:55 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 01 20:46:55 compute-0 systemd[1]: Finished Create netns directory.
Dec 01 20:46:55 compute-0 sudo[222926]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:56 compute-0 sudo[223118]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyvyzkhmscffuooislcgvgsgpepcrhqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622015.7023902-429-138399321483506/AnsiballZ_file.py'
Dec 01 20:46:56 compute-0 ceph-mon[75880]: pgmap v516: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:56 compute-0 sudo[223118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:56 compute-0 python3.9[223120]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:46:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v517: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:56 compute-0 sudo[223118]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:57 compute-0 sudo[223270]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihfldyczeydzgyitjykvmkuqjrjthcbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622016.8583968-437-1389897288639/AnsiballZ_stat.py'
Dec 01 20:46:57 compute-0 sudo[223270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:57 compute-0 python3.9[223272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:46:57 compute-0 sudo[223270]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:57 compute-0 ceph-mon[75880]: pgmap v517: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:57 compute-0 sudo[223393]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eynpvoyzrhiohvzdlialmzysigqtcwoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622016.8583968-437-1389897288639/AnsiballZ_copy.py'
Dec 01 20:46:57 compute-0 sudo[223393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:57 compute-0 python3.9[223395]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764622016.8583968-437-1389897288639/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:46:57 compute-0 sudo[223393]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:46:58 compute-0 sudo[223545]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfmplgodcpzgrfjdgidtrwlybmhdqiav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622018.172892-454-252404569276091/AnsiballZ_file.py'
Dec 01 20:46:58 compute-0 sudo[223545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:58 compute-0 python3.9[223547]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:46:58 compute-0 sudo[223545]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:46:59 compute-0 sudo[223697]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npppwhblcbrzpmkwctjeimvcpytfioqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622018.7917824-462-242276007211615/AnsiballZ_stat.py'
Dec 01 20:46:59 compute-0 sudo[223697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:59 compute-0 python3.9[223699]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:46:59 compute-0 sudo[223697]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:59 compute-0 sudo[223820]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnhqogpmzxcxxzaigablmoeobeclpcmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622018.7917824-462-242276007211615/AnsiballZ_copy.py'
Dec 01 20:46:59 compute-0 sudo[223820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:46:59 compute-0 python3.9[223822]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764622018.7917824-462-242276007211615/.source.json _original_basename=.h5ch82f2 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:46:59 compute-0 sudo[223820]: pam_unix(sudo:session): session closed for user root
Dec 01 20:46:59 compute-0 ceph-mon[75880]: pgmap v518: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:00 compute-0 sudo[223972]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyntniombminacaimortlitgmyybtxtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622019.8832383-477-26130313665364/AnsiballZ_file.py'
Dec 01 20:47:00 compute-0 sudo[223972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:00 compute-0 python3.9[223974]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:00 compute-0 sudo[223972]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:00 compute-0 sudo[224124]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-negjztdooyqdqbarqblkyscakyglgves ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622020.5608532-485-62946798289300/AnsiballZ_stat.py'
Dec 01 20:47:00 compute-0 sudo[224124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:00 compute-0 sudo[224124]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:01 compute-0 sudo[224247]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwzgektzzkwvhujlaxugnzhjfymtntli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622020.5608532-485-62946798289300/AnsiballZ_copy.py'
Dec 01 20:47:01 compute-0 sudo[224247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:01 compute-0 sudo[224247]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:01 compute-0 ceph-mon[75880]: pgmap v519: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:02 compute-0 sudo[224399]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lorxdzyvfbxkfxmzfaeeexjaoobqdmbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622021.7398636-502-75569545511927/AnsiballZ_container_config_data.py'
Dec 01 20:47:02 compute-0 sudo[224399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:02 compute-0 python3.9[224401]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 01 20:47:02 compute-0 sudo[224399]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v520: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.777210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622022777301, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 2396437, "memory_usage": 2442616, "flush_reason": "Manual Compaction"}
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622022797936, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2324729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9081, "largest_seqno": 11124, "table_properties": {"data_size": 2315440, "index_size": 5911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17882, "raw_average_key_size": 19, "raw_value_size": 2296962, "raw_average_value_size": 2502, "num_data_blocks": 271, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621788, "oldest_key_time": 1764621788, "file_creation_time": 1764622022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 20772 microseconds, and 5249 cpu microseconds.
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.797988) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2324729 bytes OK
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.798008) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.799331) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.799344) EVENT_LOG_v1 {"time_micros": 1764622022799340, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.799372) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2387902, prev total WAL file size 2387902, number of live WAL files 2.
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.800232) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2270KB)], [26(4724KB)]
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622022800271, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 7162669, "oldest_snapshot_seqno": -1}
Dec 01 20:47:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3226 keys, 6032622 bytes, temperature: kUnknown
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622022883892, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 6032622, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6006428, "index_size": 16996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 74434, "raw_average_key_size": 23, "raw_value_size": 5944110, "raw_average_value_size": 1842, "num_data_blocks": 749, "num_entries": 3226, "num_filter_entries": 3226, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 1764622022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:47:02 compute-0 sudo[224551]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjknkwcjyeptzlxbnzicigfsbubgwpse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622022.5016775-511-14854231712339/AnsiballZ_container_config_hash.py'
Dec 01 20:47:02 compute-0 sudo[224551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.912749) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 6032622 bytes
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.944897) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.4 rd, 72.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 4.6 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 3740, records dropped: 514 output_compression: NoCompression
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.944948) EVENT_LOG_v1 {"time_micros": 1764622022944929, "job": 10, "event": "compaction_finished", "compaction_time_micros": 83834, "compaction_time_cpu_micros": 20228, "output_level": 6, "num_output_files": 1, "total_output_size": 6032622, "num_input_records": 3740, "num_output_records": 3226, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622022946673, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622022948492, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.800114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.948705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.948711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.948713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.948715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:47:02 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:47:02.948716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:47:03 compute-0 python3.9[224553]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 01 20:47:03 compute-0 sudo[224551]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:47:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:47:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:47:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:47:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:47:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:47:03 compute-0 sudo[224703]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwrtljfkhanpalcrlctdieowtdcytwgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622023.3309243-520-275643983453777/AnsiballZ_podman_container_info.py'
Dec 01 20:47:03 compute-0 sudo[224703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:03 compute-0 ceph-mon[75880]: pgmap v520: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:03 compute-0 python3.9[224705]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 01 20:47:04 compute-0 sudo[224703]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:05 compute-0 sudo[224881]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrhwbzfjecccjhqgrdgtwnnshdzfeksu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764622024.66739-533-27420692877992/AnsiballZ_edpm_container_manage.py'
Dec 01 20:47:05 compute-0 sudo[224881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:05 compute-0 python3[224883]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 01 20:47:05 compute-0 ceph-mon[75880]: pgmap v521: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:06 compute-0 podman[224896]: 2025-12-01 20:47:06.660107108 +0000 UTC m=+1.229257970 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 01 20:47:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:06 compute-0 podman[224954]: 2025-12-01 20:47:06.79738227 +0000 UTC m=+0.040677731 container create f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd)
Dec 01 20:47:06 compute-0 podman[224954]: 2025-12-01 20:47:06.776512748 +0000 UTC m=+0.019808239 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 01 20:47:06 compute-0 python3[224883]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 01 20:47:06 compute-0 sudo[224881]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:07 compute-0 sudo[225142]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlmpimxfzksziscynipbvheumygxxmej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622027.0713253-541-24420976140754/AnsiballZ_stat.py'
Dec 01 20:47:07 compute-0 sudo[225142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:07 compute-0 python3.9[225144]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:47:07 compute-0 sudo[225142]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:07 compute-0 sudo[225296]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ameritdpaobknyblqqmoahqhbnqcqpag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622027.7304392-550-260445884020054/AnsiballZ_file.py'
Dec 01 20:47:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:07 compute-0 sudo[225296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:07 compute-0 ceph-mon[75880]: pgmap v522: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:08 compute-0 python3.9[225298]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:08 compute-0 sudo[225296]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:08 compute-0 sudo[225372]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luvyjtcejoirztyanptsfnkfgipxmhvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622027.7304392-550-260445884020054/AnsiballZ_stat.py'
Dec 01 20:47:08 compute-0 sudo[225372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:08 compute-0 python3.9[225374]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:47:08 compute-0 sudo[225372]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:09 compute-0 sudo[225523]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pewojpbtbqeasnilualynuathjjsuzsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622028.6419346-550-123180661497830/AnsiballZ_copy.py'
Dec 01 20:47:09 compute-0 sudo[225523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:09 compute-0 python3.9[225525]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764622028.6419346-550-123180661497830/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:09 compute-0 sudo[225523]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:09 compute-0 sudo[225599]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfqlahnueqoysyfboxwrwkusjdmiadxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622028.6419346-550-123180661497830/AnsiballZ_systemd.py'
Dec 01 20:47:09 compute-0 sudo[225599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:09 compute-0 python3.9[225601]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 01 20:47:09 compute-0 systemd[1]: Reloading.
Dec 01 20:47:09 compute-0 systemd-rc-local-generator[225625]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:47:09 compute-0 systemd-sysv-generator[225630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:47:10 compute-0 ceph-mon[75880]: pgmap v523: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:10 compute-0 sudo[225599]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:10 compute-0 sudo[225709]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wckkipfwrhbozvbnugxebrcaceowocxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622028.6419346-550-123180661497830/AnsiballZ_systemd.py'
Dec 01 20:47:10 compute-0 sudo[225709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:10 compute-0 python3.9[225711]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:47:10 compute-0 systemd[1]: Reloading.
Dec 01 20:47:10 compute-0 systemd-sysv-generator[225744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:47:10 compute-0 systemd-rc-local-generator[225739]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:47:11 compute-0 systemd[1]: Starting multipathd container...
Dec 01 20:47:11 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37db1f97ee38038465b423ba6897aba33b05ce615f6cbd7a4c640f403a19f479/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37db1f97ee38038465b423ba6897aba33b05ce615f6cbd7a4c640f403a19f479/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc.
Dec 01 20:47:11 compute-0 podman[225751]: 2025-12-01 20:47:11.389785098 +0000 UTC m=+0.163666679 container init f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:47:11 compute-0 multipathd[225766]: + sudo -E kolla_set_configs
Dec 01 20:47:11 compute-0 podman[225751]: 2025-12-01 20:47:11.434076972 +0000 UTC m=+0.207958543 container start f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 01 20:47:11 compute-0 podman[225751]: multipathd
Dec 01 20:47:11 compute-0 systemd[1]: Started multipathd container.
Dec 01 20:47:11 compute-0 sudo[225773]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 01 20:47:11 compute-0 sudo[225773]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 01 20:47:11 compute-0 sudo[225773]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 01 20:47:11 compute-0 sudo[225709]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:11 compute-0 multipathd[225766]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 01 20:47:11 compute-0 multipathd[225766]: INFO:__main__:Validating config file
Dec 01 20:47:11 compute-0 multipathd[225766]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 01 20:47:11 compute-0 multipathd[225766]: INFO:__main__:Writing out command to execute
Dec 01 20:47:11 compute-0 sudo[225773]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:11 compute-0 multipathd[225766]: ++ cat /run_command
Dec 01 20:47:11 compute-0 multipathd[225766]: + CMD='/usr/sbin/multipathd -d'
Dec 01 20:47:11 compute-0 multipathd[225766]: + ARGS=
Dec 01 20:47:11 compute-0 multipathd[225766]: + sudo kolla_copy_cacerts
Dec 01 20:47:11 compute-0 podman[225772]: 2025-12-01 20:47:11.546400403 +0000 UTC m=+0.099483044 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 01 20:47:11 compute-0 sudo[225814]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 01 20:47:11 compute-0 systemd[1]: f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc-70130432c3cc0bf4.service: Main process exited, code=exited, status=1/FAILURE
Dec 01 20:47:11 compute-0 sudo[225814]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 01 20:47:11 compute-0 systemd[1]: f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc-70130432c3cc0bf4.service: Failed with result 'exit-code'.
Dec 01 20:47:11 compute-0 sudo[225814]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 01 20:47:11 compute-0 sudo[225814]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:11 compute-0 multipathd[225766]: + [[ ! -n '' ]]
Dec 01 20:47:11 compute-0 multipathd[225766]: + . kolla_extend_start
Dec 01 20:47:11 compute-0 multipathd[225766]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 01 20:47:11 compute-0 multipathd[225766]: Running command: '/usr/sbin/multipathd -d'
Dec 01 20:47:11 compute-0 multipathd[225766]: + umask 0022
Dec 01 20:47:11 compute-0 multipathd[225766]: + exec /usr/sbin/multipathd -d
Dec 01 20:47:11 compute-0 multipathd[225766]: 3317.312917 | --------start up--------
Dec 01 20:47:11 compute-0 multipathd[225766]: 3317.312945 | read /etc/multipath.conf
Dec 01 20:47:11 compute-0 multipathd[225766]: 3317.321470 | path checkers start up
Dec 01 20:47:12 compute-0 python3.9[225955]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:47:12 compute-0 ceph-mon[75880]: pgmap v524: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:12 compute-0 sudo[226107]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggfljwkmdbmyknpcsfboogbrjinsyzta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622032.2559586-586-278730319007937/AnsiballZ_command.py'
Dec 01 20:47:12 compute-0 sudo[226107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:12 compute-0 python3.9[226109]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:47:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v525: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:12 compute-0 sudo[226107]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:13 compute-0 sudo[226272]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwymaerexbjoogwvulnvpfmzyxibrhgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622032.9306576-594-153688302135086/AnsiballZ_systemd.py'
Dec 01 20:47:13 compute-0 sudo[226272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:13 compute-0 python3.9[226274]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:47:13 compute-0 systemd[1]: Stopping multipathd container...
Dec 01 20:47:13 compute-0 multipathd[225766]: 3319.255986 | exit (signal)
Dec 01 20:47:13 compute-0 multipathd[225766]: 3319.256572 | --------shut down-------
Dec 01 20:47:13 compute-0 systemd[1]: libpod-f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc.scope: Deactivated successfully.
Dec 01 20:47:13 compute-0 podman[226278]: 2025-12-01 20:47:13.564148941 +0000 UTC m=+0.065270611 container died f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:47:13 compute-0 systemd[1]: f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc-70130432c3cc0bf4.timer: Deactivated successfully.
Dec 01 20:47:13 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc.
Dec 01 20:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc-userdata-shm.mount: Deactivated successfully.
Dec 01 20:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-37db1f97ee38038465b423ba6897aba33b05ce615f6cbd7a4c640f403a19f479-merged.mount: Deactivated successfully.
Dec 01 20:47:13 compute-0 podman[226278]: 2025-12-01 20:47:13.659253875 +0000 UTC m=+0.160375575 container cleanup f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 01 20:47:13 compute-0 podman[226278]: multipathd
Dec 01 20:47:13 compute-0 podman[226308]: multipathd
Dec 01 20:47:13 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 01 20:47:13 compute-0 systemd[1]: Stopped multipathd container.
Dec 01 20:47:13 compute-0 systemd[1]: Starting multipathd container...
Dec 01 20:47:13 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37db1f97ee38038465b423ba6897aba33b05ce615f6cbd7a4c640f403a19f479/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37db1f97ee38038465b423ba6897aba33b05ce615f6cbd7a4c640f403a19f479/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:13 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc.
Dec 01 20:47:13 compute-0 podman[226320]: 2025-12-01 20:47:13.920303301 +0000 UTC m=+0.140934328 container init f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Dec 01 20:47:13 compute-0 multipathd[226335]: + sudo -E kolla_set_configs
Dec 01 20:47:13 compute-0 sudo[226341]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 01 20:47:13 compute-0 podman[226320]: 2025-12-01 20:47:13.960924749 +0000 UTC m=+0.181555746 container start f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 20:47:13 compute-0 sudo[226341]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 01 20:47:13 compute-0 sudo[226341]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 01 20:47:13 compute-0 podman[226320]: multipathd
Dec 01 20:47:13 compute-0 systemd[1]: Started multipathd container.
Dec 01 20:47:14 compute-0 multipathd[226335]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 01 20:47:14 compute-0 multipathd[226335]: INFO:__main__:Validating config file
Dec 01 20:47:14 compute-0 multipathd[226335]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 01 20:47:14 compute-0 multipathd[226335]: INFO:__main__:Writing out command to execute
Dec 01 20:47:14 compute-0 sudo[226341]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:14 compute-0 multipathd[226335]: ++ cat /run_command
Dec 01 20:47:14 compute-0 multipathd[226335]: + CMD='/usr/sbin/multipathd -d'
Dec 01 20:47:14 compute-0 multipathd[226335]: + ARGS=
Dec 01 20:47:14 compute-0 multipathd[226335]: + sudo kolla_copy_cacerts
Dec 01 20:47:14 compute-0 sudo[226272]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:14 compute-0 sudo[226357]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 01 20:47:14 compute-0 sudo[226357]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 01 20:47:14 compute-0 sudo[226357]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 01 20:47:14 compute-0 sudo[226357]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:14 compute-0 multipathd[226335]: + [[ ! -n '' ]]
Dec 01 20:47:14 compute-0 multipathd[226335]: + . kolla_extend_start
Dec 01 20:47:14 compute-0 multipathd[226335]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 01 20:47:14 compute-0 multipathd[226335]: Running command: '/usr/sbin/multipathd -d'
Dec 01 20:47:14 compute-0 multipathd[226335]: + umask 0022
Dec 01 20:47:14 compute-0 multipathd[226335]: + exec /usr/sbin/multipathd -d
Dec 01 20:47:14 compute-0 podman[226342]: 2025-12-01 20:47:14.055956022 +0000 UTC m=+0.075828085 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 20:47:14 compute-0 multipathd[226335]: 3319.784041 | --------start up--------
Dec 01 20:47:14 compute-0 multipathd[226335]: 3319.784057 | read /etc/multipath.conf
Dec 01 20:47:14 compute-0 systemd[1]: f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc-1c6f9030fb8a15ff.service: Main process exited, code=exited, status=1/FAILURE
Dec 01 20:47:14 compute-0 systemd[1]: f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc-1c6f9030fb8a15ff.service: Failed with result 'exit-code'.
Dec 01 20:47:14 compute-0 multipathd[226335]: 3319.790051 | path checkers start up
Dec 01 20:47:14 compute-0 ceph-mon[75880]: pgmap v525: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:14 compute-0 sudo[226523]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivtrwexzrgqicleryoydhnxdaadvtvqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622034.2259831-602-83524871487374/AnsiballZ_file.py'
Dec 01 20:47:14 compute-0 sudo[226523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v526: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:14 compute-0 python3.9[226525]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:14 compute-0 sudo[226523]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:15 compute-0 sudo[226675]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zghjheqienrkttigrrdtgmudkzsvqmdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622035.0721576-614-246512578700109/AnsiballZ_file.py'
Dec 01 20:47:15 compute-0 sudo[226675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:15 compute-0 python3.9[226677]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 01 20:47:15 compute-0 sudo[226675]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:15 compute-0 sudo[226842]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybbkyhcfexviqnsvgbecylfgqkybhbjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622035.6957211-622-242799849771627/AnsiballZ_modprobe.py'
Dec 01 20:47:15 compute-0 sudo[226842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:15 compute-0 podman[226801]: 2025-12-01 20:47:15.975098252 +0000 UTC m=+0.049624574 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 01 20:47:16 compute-0 python3.9[226848]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 01 20:47:16 compute-0 ceph-mon[75880]: pgmap v526: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:16 compute-0 kernel: Key type psk registered
Dec 01 20:47:16 compute-0 sudo[226842]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:16 compute-0 sudo[227024]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zssxcmbcyxocasakvcelsmslzvviohuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622036.392341-630-148597789430986/AnsiballZ_stat.py'
Dec 01 20:47:16 compute-0 sudo[227024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:16 compute-0 podman[226983]: 2025-12-01 20:47:16.699426326 +0000 UTC m=+0.079061588 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Dec 01 20:47:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:16 compute-0 python3.9[227031]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:47:16 compute-0 sudo[227024]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:17 compute-0 sudo[227158]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhzxzbtjctajaukrphcfbvujbsbxpqis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622036.392341-630-148597789430986/AnsiballZ_copy.py'
Dec 01 20:47:17 compute-0 sudo[227158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:17 compute-0 python3.9[227160]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764622036.392341-630-148597789430986/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:17 compute-0 sudo[227158]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:17 compute-0 sudo[227310]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykjibgqegclfxiolhfsedtwpnrwkfwtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622037.5399418-646-215701419052337/AnsiballZ_lineinfile.py'
Dec 01 20:47:17 compute-0 sudo[227310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:17 compute-0 python3.9[227312]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:17 compute-0 sudo[227310]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:18 compute-0 ceph-mon[75880]: pgmap v527: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:18 compute-0 sudo[227462]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkzlcmphkcfiimxbaebrvuwssvbfzhep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622038.1241739-654-26240382037522/AnsiballZ_systemd.py'
Dec 01 20:47:18 compute-0 sudo[227462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:18 compute-0 python3.9[227464]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:47:18 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 01 20:47:18 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 01 20:47:18 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 01 20:47:18 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 01 20:47:18 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 01 20:47:18 compute-0 sudo[227462]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:19 compute-0 sudo[227618]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pltyyetaqfhtyskczqserwseqpjahskb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622039.1022542-662-220072335719720/AnsiballZ_dnf.py'
Dec 01 20:47:19 compute-0 sudo[227618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:19 compute-0 python3.9[227620]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 01 20:47:20 compute-0 ceph-mon[75880]: pgmap v528: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:21 compute-0 systemd[1]: Reloading.
Dec 01 20:47:21 compute-0 systemd-rc-local-generator[227654]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:47:21 compute-0 systemd-sysv-generator[227657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:47:21 compute-0 systemd[1]: Reloading.
Dec 01 20:47:22 compute-0 systemd-rc-local-generator[227692]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:47:22 compute-0 systemd-sysv-generator[227696]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:47:22 compute-0 ceph-mon[75880]: pgmap v529: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:22 compute-0 systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 01 20:47:22 compute-0 systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 01 20:47:22 compute-0 lvm[227735]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:47:22 compute-0 lvm[227735]: VG ceph_vg2 finished
Dec 01 20:47:22 compute-0 lvm[227737]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:47:22 compute-0 lvm[227737]: VG ceph_vg0 finished
Dec 01 20:47:22 compute-0 lvm[227736]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:47:22 compute-0 lvm[227736]: VG ceph_vg1 finished
Dec 01 20:47:22 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 01 20:47:22 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 01 20:47:22 compute-0 systemd[1]: Reloading.
Dec 01 20:47:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:22 compute-0 systemd-rc-local-generator[227793]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:47:22 compute-0 systemd-sysv-generator[227796]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:47:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:22 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 01 20:47:23 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 01 20:47:23 compute-0 sudo[227618]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 01 20:47:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 01 20:47:23 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.299s CPU time.
Dec 01 20:47:23 compute-0 systemd[1]: run-r39f38b51f0a64c4da0e8242e38cd1309.service: Deactivated successfully.
Dec 01 20:47:24 compute-0 sudo[229081]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyxqknlkhbgomcrhkvomabusyzxseptg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622043.8135912-670-124150873854887/AnsiballZ_systemd_service.py'
Dec 01 20:47:24 compute-0 sudo[229081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:24 compute-0 ceph-mon[75880]: pgmap v530: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:24 compute-0 python3.9[229083]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:47:24 compute-0 iscsid[216801]: iscsid shutting down.
Dec 01 20:47:24 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 01 20:47:24 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 01 20:47:24 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 01 20:47:24 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 01 20:47:24 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 01 20:47:24 compute-0 systemd[1]: Started Open-iSCSI.
Dec 01 20:47:24 compute-0 sudo[229081]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:24 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 01 20:47:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:25 compute-0 python3.9[229238]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 01 20:47:25 compute-0 sudo[229392]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldmsczasqgcfjqpklvlpqrgdvccveljx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622045.596689-688-233766479152870/AnsiballZ_file.py'
Dec 01 20:47:25 compute-0 sudo[229392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:26 compute-0 python3.9[229394]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:26 compute-0 sudo[229392]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:26 compute-0 ceph-mon[75880]: pgmap v531: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:26 compute-0 sudo[229544]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzhkmnwbeaddznzfxpkmglsvtmwwzedc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622046.4337187-699-58740410806401/AnsiballZ_systemd_service.py'
Dec 01 20:47:26 compute-0 sudo[229544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:27 compute-0 python3.9[229546]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 01 20:47:27 compute-0 systemd[1]: Reloading.
Dec 01 20:47:27 compute-0 systemd-sysv-generator[229574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:47:27 compute-0 systemd-rc-local-generator[229569]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:47:27 compute-0 sudo[229544]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:28 compute-0 python3.9[229730]: ansible-ansible.builtin.service_facts Invoked
Dec 01 20:47:28 compute-0 network[229747]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 01 20:47:28 compute-0 network[229748]: 'network-scripts' will be removed from distribution in near future.
Dec 01 20:47:28 compute-0 network[229749]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 01 20:47:28 compute-0 ceph-mon[75880]: pgmap v532: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:29 compute-0 ceph-mon[75880]: pgmap v533: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:31 compute-0 ceph-mon[75880]: pgmap v534: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:32 compute-0 sudo[230022]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkznkzgprxbdrvrzvcxgaowtryudrfgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622052.2196035-718-184437401698816/AnsiballZ_systemd_service.py'
Dec 01 20:47:32 compute-0 sudo[230022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:47:32
Dec 01 20:47:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:47:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:47:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.mgr', 'backups', 'images', 'cephfs.cephfs.data', 'volumes']
Dec 01 20:47:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:47:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:32 compute-0 python3.9[230024]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:47:32 compute-0 sudo[230022]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:32 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:33 compute-0 sudo[230175]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsqpaugfmtuwxuxtbodsombnxddnipwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622052.896187-718-120413731970876/AnsiballZ_systemd_service.py'
Dec 01 20:47:33 compute-0 sudo[230175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:47:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:47:33 compute-0 python3.9[230177]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:47:33 compute-0 sudo[230175]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:33 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 01 20:47:33 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 01 20:47:33 compute-0 sudo[230330]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brjazttbkwtiossvulidcfptrzkjevdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622053.5540123-718-24615039104222/AnsiballZ_systemd_service.py'
Dec 01 20:47:33 compute-0 sudo[230330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:33 compute-0 ceph-mon[75880]: pgmap v535: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:34 compute-0 python3.9[230332]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:47:34 compute-0 sudo[230330]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:34 compute-0 sudo[230483]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxnabgthhikexuegsclybzdjsyamopvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622054.2099507-718-80514687665429/AnsiballZ_systemd_service.py'
Dec 01 20:47:34 compute-0 sudo[230483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:34 compute-0 python3.9[230485]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:47:34 compute-0 sudo[230483]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:35 compute-0 sudo[230636]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvlsjytjstfojqykebfiwgbrvcxlxulu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622054.9067175-718-269269095084657/AnsiballZ_systemd_service.py'
Dec 01 20:47:35 compute-0 sudo[230636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:35 compute-0 python3.9[230638]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:47:35 compute-0 sudo[230636]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:35 compute-0 sudo[230751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:47:35 compute-0 sudo[230751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:47:35 compute-0 sudo[230751]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:35 compute-0 sudo[230822]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esdxxnfewreegrvdmyvpvnwekfslgkav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622055.5557866-718-101544465442099/AnsiballZ_systemd_service.py'
Dec 01 20:47:35 compute-0 sudo[230822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:35 compute-0 sudo[230804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:47:35 compute-0 sudo[230804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:47:35 compute-0 ceph-mon[75880]: pgmap v536: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:36 compute-0 python3.9[230839]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:47:36 compute-0 sudo[230822]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:36 compute-0 sudo[230804]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:47:36 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:47:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:47:36 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:47:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:47:36 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:47:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:47:36 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:47:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:47:36 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:47:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:47:36 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:47:36 compute-0 sudo[230974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:47:36 compute-0 sudo[230974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:47:36 compute-0 sudo[230974]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:36 compute-0 sudo[231008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:47:36 compute-0 sudo[231008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:47:36 compute-0 sudo[231074]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pundoworaykxcnqghgeclsqwlprxmitn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622056.2905264-718-211989126723431/AnsiballZ_systemd_service.py'
Dec 01 20:47:36 compute-0 sudo[231074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:36 compute-0 podman[231087]: 2025-12-01 20:47:36.807606375 +0000 UTC m=+0.047377013 container create b37c56914206d81708a02a0db85b44424dcddc1a3eb25df9a233772eb8f1c8dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:47:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:47:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:47:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:47:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:47:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:47:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:47:36 compute-0 systemd[1]: Started libpod-conmon-b37c56914206d81708a02a0db85b44424dcddc1a3eb25df9a233772eb8f1c8dc.scope.
Dec 01 20:47:36 compute-0 python3.9[231076]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:47:36 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:47:36 compute-0 podman[231087]: 2025-12-01 20:47:36.783824671 +0000 UTC m=+0.023595349 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:47:36 compute-0 podman[231087]: 2025-12-01 20:47:36.885930986 +0000 UTC m=+0.125701684 container init b37c56914206d81708a02a0db85b44424dcddc1a3eb25df9a233772eb8f1c8dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 01 20:47:36 compute-0 podman[231087]: 2025-12-01 20:47:36.895100309 +0000 UTC m=+0.134870947 container start b37c56914206d81708a02a0db85b44424dcddc1a3eb25df9a233772eb8f1c8dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:47:36 compute-0 podman[231087]: 2025-12-01 20:47:36.898426919 +0000 UTC m=+0.138197577 container attach b37c56914206d81708a02a0db85b44424dcddc1a3eb25df9a233772eb8f1c8dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 01 20:47:36 compute-0 modest_meninsky[231103]: 167 167
Dec 01 20:47:36 compute-0 systemd[1]: libpod-b37c56914206d81708a02a0db85b44424dcddc1a3eb25df9a233772eb8f1c8dc.scope: Deactivated successfully.
Dec 01 20:47:36 compute-0 sudo[231074]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:36 compute-0 podman[231087]: 2025-12-01 20:47:36.903965561 +0000 UTC m=+0.143736209 container died b37c56914206d81708a02a0db85b44424dcddc1a3eb25df9a233772eb8f1c8dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:47:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-40a0bba576fe4daa4afd598b0d53b17a0e3fd4c17c0135ef1faea153c7bd96b2-merged.mount: Deactivated successfully.
Dec 01 20:47:36 compute-0 podman[231087]: 2025-12-01 20:47:36.942135209 +0000 UTC m=+0.181905847 container remove b37c56914206d81708a02a0db85b44424dcddc1a3eb25df9a233772eb8f1c8dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Dec 01 20:47:36 compute-0 systemd[1]: libpod-conmon-b37c56914206d81708a02a0db85b44424dcddc1a3eb25df9a233772eb8f1c8dc.scope: Deactivated successfully.
Dec 01 20:47:37 compute-0 podman[231175]: 2025-12-01 20:47:37.091611076 +0000 UTC m=+0.036998651 container create 430ff6ae22f3dea8cc13cc4d73e3e5c52d94c445899b19d2b6727dfdfce5257e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:47:37 compute-0 systemd[1]: Started libpod-conmon-430ff6ae22f3dea8cc13cc4d73e3e5c52d94c445899b19d2b6727dfdfce5257e.scope.
Dec 01 20:47:37 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55594636b3c5854a6109149fbcfd24c6dd8a5fdb377152fc1194c1c8ce0bbc82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55594636b3c5854a6109149fbcfd24c6dd8a5fdb377152fc1194c1c8ce0bbc82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55594636b3c5854a6109149fbcfd24c6dd8a5fdb377152fc1194c1c8ce0bbc82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55594636b3c5854a6109149fbcfd24c6dd8a5fdb377152fc1194c1c8ce0bbc82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55594636b3c5854a6109149fbcfd24c6dd8a5fdb377152fc1194c1c8ce0bbc82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:37 compute-0 podman[231175]: 2025-12-01 20:47:37.075436542 +0000 UTC m=+0.020824137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:47:37 compute-0 podman[231175]: 2025-12-01 20:47:37.19067855 +0000 UTC m=+0.136066175 container init 430ff6ae22f3dea8cc13cc4d73e3e5c52d94c445899b19d2b6727dfdfce5257e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_herschel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 01 20:47:37 compute-0 podman[231175]: 2025-12-01 20:47:37.197519296 +0000 UTC m=+0.142906901 container start 430ff6ae22f3dea8cc13cc4d73e3e5c52d94c445899b19d2b6727dfdfce5257e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_herschel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:47:37 compute-0 podman[231175]: 2025-12-01 20:47:37.201385733 +0000 UTC m=+0.146773328 container attach 430ff6ae22f3dea8cc13cc4d73e3e5c52d94c445899b19d2b6727dfdfce5257e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:47:37 compute-0 sudo[231299]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnththevhnbwviyikhseituukesymmom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622057.0400038-718-41277638226880/AnsiballZ_systemd_service.py'
Dec 01 20:47:37 compute-0 sudo[231299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:37 compute-0 python3.9[231301]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:47:37 compute-0 sudo[231299]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:37 compute-0 wizardly_herschel[231241]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:47:37 compute-0 wizardly_herschel[231241]: --> All data devices are unavailable
Dec 01 20:47:37 compute-0 systemd[1]: libpod-430ff6ae22f3dea8cc13cc4d73e3e5c52d94c445899b19d2b6727dfdfce5257e.scope: Deactivated successfully.
Dec 01 20:47:37 compute-0 podman[231175]: 2025-12-01 20:47:37.687963221 +0000 UTC m=+0.633350796 container died 430ff6ae22f3dea8cc13cc4d73e3e5c52d94c445899b19d2b6727dfdfce5257e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:47:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-55594636b3c5854a6109149fbcfd24c6dd8a5fdb377152fc1194c1c8ce0bbc82-merged.mount: Deactivated successfully.
Dec 01 20:47:37 compute-0 podman[231175]: 2025-12-01 20:47:37.725321242 +0000 UTC m=+0.670708817 container remove 430ff6ae22f3dea8cc13cc4d73e3e5c52d94c445899b19d2b6727dfdfce5257e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:47:37 compute-0 systemd[1]: libpod-conmon-430ff6ae22f3dea8cc13cc4d73e3e5c52d94c445899b19d2b6727dfdfce5257e.scope: Deactivated successfully.
Dec 01 20:47:37 compute-0 sudo[231008]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:37 compute-0 sudo[231354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:47:37 compute-0 sudo[231354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:47:37 compute-0 sudo[231354]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:37 compute-0 ceph-mon[75880]: pgmap v537: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:37 compute-0 sudo[231379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:47:37 compute-0 sudo[231379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:47:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:38 compute-0 sudo[231554]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezzyqneyxvqtfclqojfjgukhnkqofvpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622057.9253294-777-219597402146591/AnsiballZ_file.py'
Dec 01 20:47:38 compute-0 sudo[231554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:38 compute-0 podman[231524]: 2025-12-01 20:47:38.168297122 +0000 UTC m=+0.038086686 container create 2606b01a396a8e6c08c5f41d65498acea6b485d935fb553a5698791df209e166 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:47:38 compute-0 systemd[1]: Started libpod-conmon-2606b01a396a8e6c08c5f41d65498acea6b485d935fb553a5698791df209e166.scope.
Dec 01 20:47:38 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:47:38 compute-0 podman[231524]: 2025-12-01 20:47:38.24800933 +0000 UTC m=+0.117798924 container init 2606b01a396a8e6c08c5f41d65498acea6b485d935fb553a5698791df209e166 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:47:38 compute-0 podman[231524]: 2025-12-01 20:47:38.153452063 +0000 UTC m=+0.023241647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:47:38 compute-0 podman[231524]: 2025-12-01 20:47:38.257238984 +0000 UTC m=+0.127028558 container start 2606b01a396a8e6c08c5f41d65498acea6b485d935fb553a5698791df209e166 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:47:38 compute-0 podman[231524]: 2025-12-01 20:47:38.260245763 +0000 UTC m=+0.130035357 container attach 2606b01a396a8e6c08c5f41d65498acea6b485d935fb553a5698791df209e166 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 01 20:47:38 compute-0 agitated_cartwright[231559]: 167 167
Dec 01 20:47:38 compute-0 podman[231524]: 2025-12-01 20:47:38.264515454 +0000 UTC m=+0.134305018 container died 2606b01a396a8e6c08c5f41d65498acea6b485d935fb553a5698791df209e166 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 01 20:47:38 compute-0 systemd[1]: libpod-2606b01a396a8e6c08c5f41d65498acea6b485d935fb553a5698791df209e166.scope: Deactivated successfully.
Dec 01 20:47:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9b2bb59919ec4934ba6ce33e9fc8c3f0b8fe4bf89c8b1c465436b4266da62a8-merged.mount: Deactivated successfully.
Dec 01 20:47:38 compute-0 podman[231524]: 2025-12-01 20:47:38.303518419 +0000 UTC m=+0.173307983 container remove 2606b01a396a8e6c08c5f41d65498acea6b485d935fb553a5698791df209e166 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_cartwright, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:47:38 compute-0 systemd[1]: libpod-conmon-2606b01a396a8e6c08c5f41d65498acea6b485d935fb553a5698791df209e166.scope: Deactivated successfully.
Dec 01 20:47:38 compute-0 python3.9[231556]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:38 compute-0 sudo[231554]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:38 compute-0 podman[231608]: 2025-12-01 20:47:38.498212456 +0000 UTC m=+0.043974180 container create 931a829b2cf436e8876ca245ea5de80555c3366e74bf1c2078a99c7a2717fa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:47:38 compute-0 systemd[1]: Started libpod-conmon-931a829b2cf436e8876ca245ea5de80555c3366e74bf1c2078a99c7a2717fa8a.scope.
Dec 01 20:47:38 compute-0 podman[231608]: 2025-12-01 20:47:38.480937466 +0000 UTC m=+0.026699200 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:47:38 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76dac465c2c7d08cfaf035dbd80fa4b5377be3569f5d142d7f1f9faa0fad4978/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76dac465c2c7d08cfaf035dbd80fa4b5377be3569f5d142d7f1f9faa0fad4978/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76dac465c2c7d08cfaf035dbd80fa4b5377be3569f5d142d7f1f9faa0fad4978/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76dac465c2c7d08cfaf035dbd80fa4b5377be3569f5d142d7f1f9faa0fad4978/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:38 compute-0 podman[231608]: 2025-12-01 20:47:38.592823834 +0000 UTC m=+0.138585528 container init 931a829b2cf436e8876ca245ea5de80555c3366e74bf1c2078a99c7a2717fa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ellis, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 01 20:47:38 compute-0 podman[231608]: 2025-12-01 20:47:38.605461541 +0000 UTC m=+0.151223245 container start 931a829b2cf436e8876ca245ea5de80555c3366e74bf1c2078a99c7a2717fa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ellis, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:47:38 compute-0 podman[231608]: 2025-12-01 20:47:38.611724598 +0000 UTC m=+0.157486292 container attach 931a829b2cf436e8876ca245ea5de80555c3366e74bf1c2078a99c7a2717fa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 01 20:47:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:38 compute-0 sudo[231754]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zftwillwrbfmbczlkcmcuzcqrnbrnwqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622058.5084534-777-100639285102515/AnsiballZ_file.py'
Dec 01 20:47:38 compute-0 sudo[231754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:38 compute-0 infallible_ellis[231668]: {
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:     "0": [
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:         {
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "devices": [
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "/dev/loop3"
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             ],
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_name": "ceph_lv0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_size": "21470642176",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "name": "ceph_lv0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "tags": {
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.cluster_name": "ceph",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.crush_device_class": "",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.encrypted": "0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.objectstore": "bluestore",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.osd_id": "0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.type": "block",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.vdo": "0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.with_tpm": "0"
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             },
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "type": "block",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "vg_name": "ceph_vg0"
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:         }
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:     ],
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:     "1": [
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:         {
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "devices": [
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "/dev/loop4"
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             ],
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_name": "ceph_lv1",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_size": "21470642176",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "name": "ceph_lv1",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "tags": {
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.cluster_name": "ceph",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.crush_device_class": "",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.encrypted": "0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.objectstore": "bluestore",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.osd_id": "1",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.type": "block",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.vdo": "0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.with_tpm": "0"
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             },
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "type": "block",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "vg_name": "ceph_vg1"
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:         }
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:     ],
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:     "2": [
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:         {
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "devices": [
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "/dev/loop5"
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             ],
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_name": "ceph_lv2",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_size": "21470642176",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "name": "ceph_lv2",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "tags": {
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.cluster_name": "ceph",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.crush_device_class": "",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.encrypted": "0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.objectstore": "bluestore",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.osd_id": "2",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.type": "block",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.vdo": "0",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:                 "ceph.with_tpm": "0"
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             },
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "type": "block",
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:             "vg_name": "ceph_vg2"
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:         }
Dec 01 20:47:38 compute-0 infallible_ellis[231668]:     ]
Dec 01 20:47:38 compute-0 infallible_ellis[231668]: }
Dec 01 20:47:38 compute-0 systemd[1]: libpod-931a829b2cf436e8876ca245ea5de80555c3366e74bf1c2078a99c7a2717fa8a.scope: Deactivated successfully.
Dec 01 20:47:38 compute-0 podman[231608]: 2025-12-01 20:47:38.910870887 +0000 UTC m=+0.456632591 container died 931a829b2cf436e8876ca245ea5de80555c3366e74bf1c2078a99c7a2717fa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ellis, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 01 20:47:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-76dac465c2c7d08cfaf035dbd80fa4b5377be3569f5d142d7f1f9faa0fad4978-merged.mount: Deactivated successfully.
Dec 01 20:47:38 compute-0 podman[231608]: 2025-12-01 20:47:38.950198414 +0000 UTC m=+0.495960118 container remove 931a829b2cf436e8876ca245ea5de80555c3366e74bf1c2078a99c7a2717fa8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:47:38 compute-0 python3.9[231756]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:38 compute-0 systemd[1]: libpod-conmon-931a829b2cf436e8876ca245ea5de80555c3366e74bf1c2078a99c7a2717fa8a.scope: Deactivated successfully.
Dec 01 20:47:38 compute-0 sudo[231754]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:38 compute-0 sudo[231379]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:39 compute-0 sudo[231793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:47:39 compute-0 sudo[231793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:47:39 compute-0 sudo[231793]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:39 compute-0 sudo[231839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:47:39 compute-0 sudo[231839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:47:39 compute-0 sudo[231983]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtugrhtwaghyhujvmnikkbwmlowfzjoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622059.092689-777-117442636504805/AnsiballZ_file.py'
Dec 01 20:47:39 compute-0 sudo[231983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:39 compute-0 podman[231984]: 2025-12-01 20:47:39.415499729 +0000 UTC m=+0.040856497 container create 7cf6014151f6c602730e73e7f82151dc080963be6726a572c908caba710df7d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_gould, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 01 20:47:39 compute-0 systemd[1]: Started libpod-conmon-7cf6014151f6c602730e73e7f82151dc080963be6726a572c908caba710df7d1.scope.
Dec 01 20:47:39 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:47:39 compute-0 podman[231984]: 2025-12-01 20:47:39.397731314 +0000 UTC m=+0.023088102 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:47:39 compute-0 podman[231984]: 2025-12-01 20:47:39.505554217 +0000 UTC m=+0.130911005 container init 7cf6014151f6c602730e73e7f82151dc080963be6726a572c908caba710df7d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:47:39 compute-0 podman[231984]: 2025-12-01 20:47:39.512990803 +0000 UTC m=+0.138347571 container start 7cf6014151f6c602730e73e7f82151dc080963be6726a572c908caba710df7d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_gould, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:47:39 compute-0 podman[231984]: 2025-12-01 20:47:39.515309159 +0000 UTC m=+0.140665927 container attach 7cf6014151f6c602730e73e7f82151dc080963be6726a572c908caba710df7d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_gould, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:47:39 compute-0 flamboyant_gould[232002]: 167 167
Dec 01 20:47:39 compute-0 systemd[1]: libpod-7cf6014151f6c602730e73e7f82151dc080963be6726a572c908caba710df7d1.scope: Deactivated successfully.
Dec 01 20:47:39 compute-0 conmon[232002]: conmon 7cf6014151f6c602730e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cf6014151f6c602730e73e7f82151dc080963be6726a572c908caba710df7d1.scope/container/memory.events
Dec 01 20:47:39 compute-0 podman[231984]: 2025-12-01 20:47:39.520156919 +0000 UTC m=+0.145513697 container died 7cf6014151f6c602730e73e7f82151dc080963be6726a572c908caba710df7d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_gould, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 01 20:47:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-98de2051548126d8b40306ff7780ca117d4e0afe4c4ce4096d241f7d0fba404a-merged.mount: Deactivated successfully.
Dec 01 20:47:39 compute-0 podman[231984]: 2025-12-01 20:47:39.557833181 +0000 UTC m=+0.183189939 container remove 7cf6014151f6c602730e73e7f82151dc080963be6726a572c908caba710df7d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:47:39 compute-0 systemd[1]: libpod-conmon-7cf6014151f6c602730e73e7f82151dc080963be6726a572c908caba710df7d1.scope: Deactivated successfully.
Dec 01 20:47:39 compute-0 python3.9[231987]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:39 compute-0 sudo[231983]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:39 compute-0 podman[232048]: 2025-12-01 20:47:39.715730454 +0000 UTC m=+0.041452627 container create 4c8430dd9c1e6cb3f4a101820885fc102c8d930c57ec2fc6185a81c789d7b332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lamport, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:47:39 compute-0 systemd[1]: Started libpod-conmon-4c8430dd9c1e6cb3f4a101820885fc102c8d930c57ec2fc6185a81c789d7b332.scope.
Dec 01 20:47:39 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f127d925630335e9518e33806befb6d84251041cd49f94b5ba1498ef988dca13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f127d925630335e9518e33806befb6d84251041cd49f94b5ba1498ef988dca13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f127d925630335e9518e33806befb6d84251041cd49f94b5ba1498ef988dca13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:39 compute-0 podman[232048]: 2025-12-01 20:47:39.697104051 +0000 UTC m=+0.022826254 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f127d925630335e9518e33806befb6d84251041cd49f94b5ba1498ef988dca13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:47:39 compute-0 podman[232048]: 2025-12-01 20:47:39.804125018 +0000 UTC m=+0.129847211 container init 4c8430dd9c1e6cb3f4a101820885fc102c8d930c57ec2fc6185a81c789d7b332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Dec 01 20:47:39 compute-0 podman[232048]: 2025-12-01 20:47:39.810723555 +0000 UTC m=+0.136445728 container start 4c8430dd9c1e6cb3f4a101820885fc102c8d930c57ec2fc6185a81c789d7b332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lamport, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:47:39 compute-0 podman[232048]: 2025-12-01 20:47:39.813991933 +0000 UTC m=+0.139714136 container attach 4c8430dd9c1e6cb3f4a101820885fc102c8d930c57ec2fc6185a81c789d7b332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 01 20:47:39 compute-0 ceph-mon[75880]: pgmap v538: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:40 compute-0 sudo[232206]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvciuzutflqwmbytpegmysgloqfmshku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622059.7517686-777-163393065397072/AnsiballZ_file.py'
Dec 01 20:47:40 compute-0 sudo[232206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:40 compute-0 python3.9[232208]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:40 compute-0 sudo[232206]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:47:40 compute-0 lvm[232372]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:47:40 compute-0 lvm[232370]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:47:40 compute-0 lvm[232372]: VG ceph_vg1 finished
Dec 01 20:47:40 compute-0 lvm[232370]: VG ceph_vg0 finished
Dec 01 20:47:40 compute-0 lvm[232387]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:47:40 compute-0 lvm[232387]: VG ceph_vg2 finished
Dec 01 20:47:40 compute-0 compassionate_lamport[232090]: {}
Dec 01 20:47:40 compute-0 sudo[232427]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aixhkshwavmorntzwsdjvvyylmcptilj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622060.4069054-777-46342773633842/AnsiballZ_file.py'
Dec 01 20:47:40 compute-0 sudo[232427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:40 compute-0 systemd[1]: libpod-4c8430dd9c1e6cb3f4a101820885fc102c8d930c57ec2fc6185a81c789d7b332.scope: Deactivated successfully.
Dec 01 20:47:40 compute-0 systemd[1]: libpod-4c8430dd9c1e6cb3f4a101820885fc102c8d930c57ec2fc6185a81c789d7b332.scope: Consumed 1.361s CPU time.
Dec 01 20:47:40 compute-0 podman[232048]: 2025-12-01 20:47:40.666674238 +0000 UTC m=+0.992396421 container died 4c8430dd9c1e6cb3f4a101820885fc102c8d930c57ec2fc6185a81c789d7b332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lamport, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:47:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f127d925630335e9518e33806befb6d84251041cd49f94b5ba1498ef988dca13-merged.mount: Deactivated successfully.
Dec 01 20:47:40 compute-0 podman[232048]: 2025-12-01 20:47:40.713029265 +0000 UTC m=+1.038751458 container remove 4c8430dd9c1e6cb3f4a101820885fc102c8d930c57ec2fc6185a81c789d7b332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_lamport, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:47:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:40 compute-0 systemd[1]: libpod-conmon-4c8430dd9c1e6cb3f4a101820885fc102c8d930c57ec2fc6185a81c789d7b332.scope: Deactivated successfully.
Dec 01 20:47:40 compute-0 sudo[231839]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:47:40 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:47:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:47:40 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:47:40 compute-0 sudo[232442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:47:40 compute-0 sudo[232442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:47:40 compute-0 sudo[232442]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:40 compute-0 python3.9[232429]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:40 compute-0 sudo[232427]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:41 compute-0 sudo[232616]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozcvkqbqngsvrjlbcdmjxqqbcmqwretz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622060.99007-777-14541066732115/AnsiballZ_file.py'
Dec 01 20:47:41 compute-0 sudo[232616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:41 compute-0 python3.9[232618]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:41 compute-0 sudo[232616]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:41 compute-0 sudo[232768]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arvkssjmryetolyaylwetrntbzymoyrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622061.5222318-777-255564762940312/AnsiballZ_file.py'
Dec 01 20:47:41 compute-0 sudo[232768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:41 compute-0 ceph-mon[75880]: pgmap v539: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:41 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:47:41 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:47:41 compute-0 python3.9[232770]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:41 compute-0 sudo[232768]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:42 compute-0 sudo[232920]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bftibuetuhzlefncryfnwbimyccagfxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622062.0403078-777-126749013657901/AnsiballZ_file.py'
Dec 01 20:47:42 compute-0 sudo[232920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:42 compute-0 python3.9[232922]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:42 compute-0 sudo[232920]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:42 compute-0 sudo[233072]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bskzyvhnqfczttugqiasjnrmbsgbyani ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622062.5838685-834-100048925762362/AnsiballZ_file.py'
Dec 01 20:47:42 compute-0 sudo[233072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:42 compute-0 python3.9[233074]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:42 compute-0 sudo[233072]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:43 compute-0 sudo[233224]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqogwbgtildnjflipwbgtnxiijbsywyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622063.1221147-834-7344658772596/AnsiballZ_file.py'
Dec 01 20:47:43 compute-0 sudo[233224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:43 compute-0 python3.9[233226]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:43 compute-0 sudo[233224]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:43 compute-0 ceph-mon[75880]: pgmap v540: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:43 compute-0 sudo[233376]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbxsxxpfvhomhkdnuxzbsljqndkvroku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622063.671955-834-36204426356421/AnsiballZ_file.py'
Dec 01 20:47:43 compute-0 sudo[233376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:44 compute-0 python3.9[233378]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:44 compute-0 sudo[233376]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:47:44.347 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:47:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:47:44.348 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:47:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:47:44.348 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:47:44 compute-0 sudo[233539]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umpgyitowfcafqqgbtnekxqkmqnnqlsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622064.260747-834-183323736822222/AnsiballZ_file.py'
Dec 01 20:47:44 compute-0 sudo[233539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:44 compute-0 podman[233502]: 2025-12-01 20:47:44.507607312 +0000 UTC m=+0.053908028 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 01 20:47:44 compute-0 python3.9[233550]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:44 compute-0 sudo[233539]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:45 compute-0 sudo[233701]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwbyqvtrmeavuvsjoxwcawszmotesibb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622064.8276844-834-190886262297760/AnsiballZ_file.py'
Dec 01 20:47:45 compute-0 sudo[233701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:45 compute-0 python3.9[233703]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:45 compute-0 sudo[233701]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:45 compute-0 sudo[233853]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvmyioumiohvyzjzpbyxizuunfbmetfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622065.4459956-834-248755060832813/AnsiballZ_file.py'
Dec 01 20:47:45 compute-0 sudo[233853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:45 compute-0 python3.9[233855]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:45 compute-0 sudo[233853]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:45 compute-0 ceph-mon[75880]: pgmap v541: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:46 compute-0 podman[233909]: 2025-12-01 20:47:46.083103129 +0000 UTC m=+0.046675709 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 01 20:47:46 compute-0 sudo[234023]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whjrvmudbmjrcvtuiehrdhahzfclboqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622065.9861717-834-41404558826128/AnsiballZ_file.py'
Dec 01 20:47:46 compute-0 sudo[234023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:46 compute-0 python3.9[234025]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:46 compute-0 sudo[234023]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:46 compute-0 sudo[234191]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auquynttnkyigmdoxbclemqqheoawtmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622066.5454617-834-65629507331227/AnsiballZ_file.py'
Dec 01 20:47:46 compute-0 sudo[234191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:46 compute-0 podman[234149]: 2025-12-01 20:47:46.824413682 +0000 UTC m=+0.080375570 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 01 20:47:46 compute-0 python3.9[234196]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:47:47 compute-0 sudo[234191]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:47 compute-0 sudo[234353]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfqceahutyjgzcvkxbknbcjfetsknxnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622067.2574298-892-20672760590024/AnsiballZ_command.py'
Dec 01 20:47:47 compute-0 sudo[234353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:47 compute-0 python3.9[234355]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:47:47 compute-0 sudo[234353]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:48 compute-0 ceph-mon[75880]: pgmap v542: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:48 compute-0 python3.9[234507]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 01 20:47:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:48 compute-0 sudo[234657]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwjaerdfuvkhnpiivdkfnefefdhqxymh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622068.7266605-910-229690411878865/AnsiballZ_systemd_service.py'
Dec 01 20:47:48 compute-0 sudo[234657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:49 compute-0 python3.9[234659]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 01 20:47:49 compute-0 systemd[1]: Reloading.
Dec 01 20:47:49 compute-0 systemd-rc-local-generator[234685]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:47:49 compute-0 systemd-sysv-generator[234689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:47:49 compute-0 sudo[234657]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:50 compute-0 sudo[234845]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imwpkmkezzfvlkjhylbkjlzjgrfqhrqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622070.108887-918-36873375364463/AnsiballZ_command.py'
Dec 01 20:47:50 compute-0 sudo[234845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:50 compute-0 ceph-mon[75880]: pgmap v543: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:50 compute-0 python3.9[234847]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:47:50 compute-0 sudo[234845]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:50 compute-0 sudo[234998]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soecmqvvsheitbulgegrljmbgsoskuwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622070.6404834-918-196054863275648/AnsiballZ_command.py'
Dec 01 20:47:50 compute-0 sudo[234998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:51 compute-0 python3.9[235000]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:47:51 compute-0 sudo[234998]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:51 compute-0 sudo[235151]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hohwclrwyobgwffcutzdisfmgrfchmvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622071.2241704-918-141374887915962/AnsiballZ_command.py'
Dec 01 20:47:51 compute-0 sudo[235151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:51 compute-0 python3.9[235153]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:47:51 compute-0 sudo[235151]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:51 compute-0 ceph-mon[75880]: pgmap v544: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:51 compute-0 sudo[235304]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjsmxqycvgeawpbltjxwmzcusocvgsjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622071.7593517-918-244597155578099/AnsiballZ_command.py'
Dec 01 20:47:51 compute-0 sudo[235304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:52 compute-0 python3.9[235306]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:47:52 compute-0 sudo[235304]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:52 compute-0 sudo[235457]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpuufhpdaugmrgegmiexxbujbdwcwifl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622072.2864141-918-151424956593581/AnsiballZ_command.py'
Dec 01 20:47:52 compute-0 sudo[235457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:52 compute-0 python3.9[235459]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:47:52 compute-0 sudo[235457]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:53 compute-0 sudo[235610]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrkqetkezqncwwwpyjcsooeejjpenssv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622072.9999416-918-197445696887707/AnsiballZ_command.py'
Dec 01 20:47:53 compute-0 sudo[235610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:53 compute-0 python3.9[235612]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:47:53 compute-0 sudo[235610]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:54 compute-0 sudo[235763]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgfsvzhgxkfmggdostjibunfpkrhyuvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622073.8272333-918-178851618706191/AnsiballZ_command.py'
Dec 01 20:47:54 compute-0 sudo[235763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:54 compute-0 python3.9[235765]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:47:54 compute-0 sudo[235763]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:54 compute-0 ceph-mon[75880]: pgmap v545: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:55 compute-0 sudo[235916]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmtnpfmaimzlevfgmgimzsfjtfvcxtey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622074.850098-918-178411318867050/AnsiballZ_command.py'
Dec 01 20:47:55 compute-0 sudo[235916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:55 compute-0 python3.9[235918]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 01 20:47:55 compute-0 sudo[235916]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:55 compute-0 ceph-mon[75880]: pgmap v546: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:56 compute-0 sudo[236069]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykqlhqodvajlhfrfdyoqbxqahdewzimi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622076.1967247-997-162274164207014/AnsiballZ_file.py'
Dec 01 20:47:56 compute-0 sudo[236069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:56 compute-0 python3.9[236071]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:47:56 compute-0 sudo[236069]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:56 compute-0 sudo[236221]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftvcsveuzzoxzykzpnyqvusivqcjafwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622076.7503192-997-129478888300181/AnsiballZ_file.py'
Dec 01 20:47:56 compute-0 sudo[236221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:57 compute-0 python3.9[236223]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:47:57 compute-0 sudo[236221]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:57 compute-0 sudo[236373]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaamgklytfhzhspiyqjinlkxvlmfvbag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622077.3449757-997-34467570354790/AnsiballZ_file.py'
Dec 01 20:47:57 compute-0 sudo[236373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:57 compute-0 python3.9[236375]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:47:57 compute-0 sudo[236373]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:47:58 compute-0 sudo[236525]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbuuupkmvvldgkokjxfwnmsfygkbkret ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622077.9723573-1019-76148029875026/AnsiballZ_file.py'
Dec 01 20:47:58 compute-0 sudo[236525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:58 compute-0 ceph-mon[75880]: pgmap v547: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:58 compute-0 python3.9[236527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:47:58 compute-0 sudo[236525]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v548: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:58 compute-0 sudo[236677]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzgtdjgwxuxmpjkzkzcviaukpwybhmge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622078.6133776-1019-265585441353484/AnsiballZ_file.py'
Dec 01 20:47:58 compute-0 sudo[236677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:59 compute-0 python3.9[236679]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:47:59 compute-0 sudo[236677]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:59 compute-0 sudo[236829]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uacvbnbgpukiudxtlnveshqsgpkjmvbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622079.1401868-1019-174465262392527/AnsiballZ_file.py'
Dec 01 20:47:59 compute-0 sudo[236829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:47:59 compute-0 ceph-mon[75880]: pgmap v548: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:47:59 compute-0 python3.9[236831]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:47:59 compute-0 sudo[236829]: pam_unix(sudo:session): session closed for user root
Dec 01 20:47:59 compute-0 sudo[236981]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lltiulxogrjmlpcgamgjraguvsutliby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622079.6973882-1019-72526611843065/AnsiballZ_file.py'
Dec 01 20:47:59 compute-0 sudo[236981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:00 compute-0 python3.9[236983]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:00 compute-0 sudo[236981]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:00 compute-0 sudo[237133]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqiuewwwupdcxexqnovpiqgvufdilzih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622080.2412786-1019-174508060481668/AnsiballZ_file.py'
Dec 01 20:48:00 compute-0 sudo[237133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:00 compute-0 python3.9[237135]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:00 compute-0 sudo[237133]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:01 compute-0 sudo[237285]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iamjnegndvgsrrtjltvjvskljbmedoyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622080.8104305-1019-16630328061690/AnsiballZ_file.py'
Dec 01 20:48:01 compute-0 sudo[237285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:01 compute-0 python3.9[237287]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:01 compute-0 sudo[237285]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:01 compute-0 sudo[237437]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvjquwyjuyebznqhzgvvtdumwhowofkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622081.3960695-1019-126142209982234/AnsiballZ_file.py'
Dec 01 20:48:01 compute-0 sudo[237437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:01 compute-0 python3.9[237439]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:01 compute-0 sudo[237437]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:02 compute-0 ceph-mon[75880]: pgmap v549: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:48:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:48:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:48:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:48:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:48:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:48:03 compute-0 ceph-mon[75880]: pgmap v550: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:05 compute-0 ceph-mon[75880]: pgmap v551: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:06 compute-0 sudo[237589]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmyhvcbprewmcybvkmahpwobfuirykzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622086.3714259-1208-33111396216069/AnsiballZ_getent.py'
Dec 01 20:48:06 compute-0 sudo[237589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:06 compute-0 python3.9[237591]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 01 20:48:06 compute-0 sudo[237589]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:07 compute-0 sudo[237742]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvvdpnvzmhxdwoidzglzgtwzviglmbhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622087.0917528-1216-146387460640148/AnsiballZ_group.py'
Dec 01 20:48:07 compute-0 sudo[237742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:07 compute-0 python3.9[237744]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 01 20:48:07 compute-0 groupadd[237745]: group added to /etc/group: name=nova, GID=42436
Dec 01 20:48:07 compute-0 groupadd[237745]: group added to /etc/gshadow: name=nova
Dec 01 20:48:07 compute-0 groupadd[237745]: new group: name=nova, GID=42436
Dec 01 20:48:07 compute-0 sudo[237742]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:08 compute-0 ceph-mon[75880]: pgmap v552: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:08 compute-0 sudo[237901]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywdnjtppkxlsqmujbqcpzxevectvvgnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622087.8516564-1224-72850913093324/AnsiballZ_user.py'
Dec 01 20:48:08 compute-0 sudo[237901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:08 compute-0 python3.9[237904]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 01 20:48:08 compute-0 useradd[237906]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/1
Dec 01 20:48:08 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 20:48:08 compute-0 useradd[237906]: add 'nova' to group 'libvirt'
Dec 01 20:48:08 compute-0 useradd[237906]: add 'nova' to shadow group 'libvirt'
Dec 01 20:48:08 compute-0 sudo[237901]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:08 compute-0 sshd-session[237900]: Received disconnect from 193.46.255.244 port 54110:11:  [preauth]
Dec 01 20:48:08 compute-0 sshd-session[237900]: Disconnected from authenticating user root 193.46.255.244 port 54110 [preauth]
Dec 01 20:48:09 compute-0 sshd-session[237938]: Accepted publickey for zuul from 192.168.122.30 port 38854 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:48:09 compute-0 systemd-logind[796]: New session 52 of user zuul.
Dec 01 20:48:09 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec 01 20:48:09 compute-0 sshd-session[237938]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:48:09 compute-0 sshd-session[237941]: Received disconnect from 192.168.122.30 port 38854:11: disconnected by user
Dec 01 20:48:09 compute-0 sshd-session[237941]: Disconnected from user zuul 192.168.122.30 port 38854
Dec 01 20:48:09 compute-0 sshd-session[237938]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:48:09 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec 01 20:48:09 compute-0 systemd-logind[796]: Session 52 logged out. Waiting for processes to exit.
Dec 01 20:48:09 compute-0 systemd-logind[796]: Removed session 52.
Dec 01 20:48:10 compute-0 python3.9[238091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:48:10 compute-0 ceph-mon[75880]: pgmap v553: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:10 compute-0 python3.9[238212]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764622089.7121744-1249-10591444939904/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v554: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:11 compute-0 python3.9[238362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:48:11 compute-0 python3.9[238438]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:12 compute-0 python3.9[238588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:48:12 compute-0 ceph-mon[75880]: pgmap v554: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:12 compute-0 python3.9[238709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764622091.7086954-1249-181169846674876/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:13 compute-0 python3.9[238859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:48:13 compute-0 python3.9[238980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764622092.6983974-1249-801450981526/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:14 compute-0 python3.9[239130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:48:14 compute-0 ceph-mon[75880]: pgmap v555: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:14 compute-0 podman[239225]: 2025-12-01 20:48:14.631532138 +0000 UTC m=+0.076926276 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 20:48:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:14 compute-0 python3.9[239262]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764622093.8753057-1249-116158235458678/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:15 compute-0 python3.9[239419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:48:15 compute-0 python3.9[239540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764622094.9254673-1249-1755407626433/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:16 compute-0 ceph-mon[75880]: pgmap v556: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:16 compute-0 podman[239664]: 2025-12-01 20:48:16.30305363 +0000 UTC m=+0.054890020 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:48:16 compute-0 sudo[239707]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmhjefmxhfuvlowkqdoyhlrbuylxecxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622096.034696-1332-4102862179496/AnsiballZ_file.py'
Dec 01 20:48:16 compute-0 sudo[239707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:16 compute-0 python3.9[239712]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:48:16 compute-0 sudo[239707]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:16 compute-0 sudo[239877]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epfvhrpovzvpobsydxhnbqjurouyyfck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622096.6700811-1340-121440320280246/AnsiballZ_copy.py'
Dec 01 20:48:16 compute-0 sudo[239877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:17 compute-0 podman[239836]: 2025-12-01 20:48:17.070401961 +0000 UTC m=+0.136842511 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 01 20:48:17 compute-0 python3.9[239884]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:48:17 compute-0 sudo[239877]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:17 compute-0 sudo[240040]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrdwifuzmufzbzzzeynpbwdgmztiqimz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622097.3816278-1348-256605721237511/AnsiballZ_stat.py'
Dec 01 20:48:17 compute-0 sudo[240040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:17 compute-0 python3.9[240042]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:48:17 compute-0 sudo[240040]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:18 compute-0 sudo[240192]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkzrbkhcbmsmidyqhvcjjgppvfdfzgqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622098.023605-1356-132131188747149/AnsiballZ_stat.py'
Dec 01 20:48:18 compute-0 sudo[240192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:18 compute-0 ceph-mon[75880]: pgmap v557: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:18 compute-0 python3.9[240194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:48:18 compute-0 sudo[240192]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v558: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:18 compute-0 sudo[240315]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlwxxxqciqglzxqdingzqhlwarmezxur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622098.023605-1356-132131188747149/AnsiballZ_copy.py'
Dec 01 20:48:18 compute-0 sudo[240315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:19 compute-0 python3.9[240317]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764622098.023605-1356-132131188747149/.source _original_basename=.9h_z0jlb follow=False checksum=35160323f92032dbb716264ec5e90fcd09cae7f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 01 20:48:19 compute-0 sudo[240315]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:19 compute-0 python3.9[240469]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:48:20 compute-0 ceph-mon[75880]: pgmap v558: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:20 compute-0 python3.9[240621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:48:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:21 compute-0 python3.9[240742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764622100.1464946-1382-165201119127742/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:21 compute-0 python3.9[240892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 01 20:48:22 compute-0 python3.9[241013]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764622101.3408015-1397-52820347921689/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 01 20:48:22 compute-0 ceph-mon[75880]: pgmap v559: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v560: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:22 compute-0 sudo[241163]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wprlfauaveeeulhakuxgourgcvddrscx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622102.5601306-1414-217514153153108/AnsiballZ_container_config_data.py'
Dec 01 20:48:22 compute-0 sudo[241163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:23 compute-0 python3.9[241165]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 01 20:48:23 compute-0 sudo[241163]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:23 compute-0 sudo[241315]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffzegakgrluoenfzsfditxdnhkywehen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622103.25239-1423-56682109040234/AnsiballZ_container_config_hash.py'
Dec 01 20:48:23 compute-0 sudo[241315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:23 compute-0 python3.9[241317]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 01 20:48:23 compute-0 sudo[241315]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:24 compute-0 sudo[241467]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atqsxbbtepgjjvnrfabrydyjoskeksrh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764622103.9530466-1433-121292642718956/AnsiballZ_edpm_container_manage.py'
Dec 01 20:48:24 compute-0 sudo[241467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:24 compute-0 ceph-mon[75880]: pgmap v560: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:24 compute-0 python3[241469]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 01 20:48:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:25 compute-0 ceph-mon[75880]: pgmap v561: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v562: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:27 compute-0 ceph-mon[75880]: pgmap v562: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v564: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:31 compute-0 ceph-mon[75880]: pgmap v563: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:48:32
Dec 01 20:48:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:48:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:48:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'cephfs.cephfs.data']
Dec 01 20:48:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:48:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:48:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:48:33 compute-0 ceph-mon[75880]: pgmap v564: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:35 compute-0 ceph-mon[75880]: pgmap v565: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:35 compute-0 podman[241482]: 2025-12-01 20:48:35.168883305 +0000 UTC m=+10.661779415 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 01 20:48:35 compute-0 podman[241564]: 2025-12-01 20:48:35.305999804 +0000 UTC m=+0.056345628 container create 22a53ac74756232b7838eac35bc7c427528dea43f004c5d19bff5976336b71d1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 01 20:48:35 compute-0 podman[241564]: 2025-12-01 20:48:35.271514377 +0000 UTC m=+0.021860261 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 01 20:48:35 compute-0 python3[241469]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 01 20:48:35 compute-0 sudo[241467]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:35 compute-0 sudo[241752]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnbjhomfyowytgytvptfhgmpavkyqqou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622115.632643-1441-238152647626785/AnsiballZ_stat.py'
Dec 01 20:48:35 compute-0 sudo[241752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:36 compute-0 ceph-mon[75880]: pgmap v566: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:36 compute-0 python3.9[241754]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:48:36 compute-0 sudo[241752]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:36 compute-0 sudo[241906]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdqsceqdeyfldjcqjgsvjkcizhrsgtgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622116.5367556-1453-166451794974411/AnsiballZ_container_config_data.py'
Dec 01 20:48:36 compute-0 sudo[241906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:37 compute-0 python3.9[241908]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 01 20:48:37 compute-0 sudo[241906]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:37 compute-0 sudo[242058]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcyovfxglzpbvwyqnmxtsyyyxjuhjykh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622117.2574873-1462-14272626996120/AnsiballZ_container_config_hash.py'
Dec 01 20:48:37 compute-0 sudo[242058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:37 compute-0 python3.9[242060]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 01 20:48:37 compute-0 sudo[242058]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:38 compute-0 ceph-mon[75880]: pgmap v567: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:38 compute-0 sudo[242210]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krwvtppzyskllabtlehvinsaleuhdigm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764622117.9267838-1472-68648319770462/AnsiballZ_edpm_container_manage.py'
Dec 01 20:48:38 compute-0 sudo[242210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:38 compute-0 python3[242212]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 01 20:48:38 compute-0 podman[242247]: 2025-12-01 20:48:38.59025808 +0000 UTC m=+0.040517926 container create 28dbed3fb607852ced74411dc01ff72b2a570a9b6f9cb5ab7ed4d306bc3c84d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.build-date=20251125, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 01 20:48:38 compute-0 podman[242247]: 2025-12-01 20:48:38.567073637 +0000 UTC m=+0.017333513 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 01 20:48:38 compute-0 python3[242212]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 01 20:48:38 compute-0 sudo[242210]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:39 compute-0 sudo[242435]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esxksgvndnozjwfjdklkpqjuhyfsagjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622118.8496597-1480-110423307411904/AnsiballZ_stat.py'
Dec 01 20:48:39 compute-0 sudo[242435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:39 compute-0 python3.9[242437]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:48:39 compute-0 sudo[242435]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:39 compute-0 sudo[242589]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwjqjdcjvtrtfnupfyrzlgnjbhmctzxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622119.48173-1489-122765402221951/AnsiballZ_file.py'
Dec 01 20:48:39 compute-0 sudo[242589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:48:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:40 compute-0 sudo[242592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:48:40 compute-0 sudo[242592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:48:40 compute-0 sudo[242592]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:40 compute-0 sudo[242617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:48:40 compute-0 sudo[242617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:48:41 compute-0 sudo[242617]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:48:41 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:48:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:48:41 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:48:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:48:41 compute-0 ceph-mon[75880]: pgmap v568: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:41 compute-0 python3.9[242591]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:48:41 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:48:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:48:41 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:48:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:48:41 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:48:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:48:41 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:48:41 compute-0 sudo[242589]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:41 compute-0 sudo[242673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:48:41 compute-0 sudo[242673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:48:41 compute-0 sudo[242673]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:41 compute-0 sudo[242721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:48:41 compute-0 sudo[242721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:48:41 compute-0 sudo[242893]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frgooesoabacdqrksttmvgbavgyelzam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622121.656317-1489-83798050372746/AnsiballZ_copy.py'
Dec 01 20:48:41 compute-0 sudo[242893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:41 compute-0 podman[242862]: 2025-12-01 20:48:41.961956699 +0000 UTC m=+0.036434402 container create 19a61c342351f0604981443b82953a5a692b23d29d1b0d96681a814d12f1eb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_gates, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:48:42 compute-0 systemd[1]: Started libpod-conmon-19a61c342351f0604981443b82953a5a692b23d29d1b0d96681a814d12f1eb6a.scope.
Dec 01 20:48:42 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:48:42 compute-0 podman[242862]: 2025-12-01 20:48:42.037097455 +0000 UTC m=+0.111575178 container init 19a61c342351f0604981443b82953a5a692b23d29d1b0d96681a814d12f1eb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_gates, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 01 20:48:42 compute-0 podman[242862]: 2025-12-01 20:48:41.945970082 +0000 UTC m=+0.020447795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:48:42 compute-0 podman[242862]: 2025-12-01 20:48:42.04449686 +0000 UTC m=+0.118974563 container start 19a61c342351f0604981443b82953a5a692b23d29d1b0d96681a814d12f1eb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_gates, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:48:42 compute-0 podman[242862]: 2025-12-01 20:48:42.047443367 +0000 UTC m=+0.121921070 container attach 19a61c342351f0604981443b82953a5a692b23d29d1b0d96681a814d12f1eb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_gates, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:48:42 compute-0 competent_gates[242900]: 167 167
Dec 01 20:48:42 compute-0 systemd[1]: libpod-19a61c342351f0604981443b82953a5a692b23d29d1b0d96681a814d12f1eb6a.scope: Deactivated successfully.
Dec 01 20:48:42 compute-0 podman[242862]: 2025-12-01 20:48:42.051322594 +0000 UTC m=+0.125800297 container died 19a61c342351f0604981443b82953a5a692b23d29d1b0d96681a814d12f1eb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:48:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f90ac532d67d1dcca6c0824483b5028666ddfe6ce1b9cfaacfa09d267fb775fe-merged.mount: Deactivated successfully.
Dec 01 20:48:42 compute-0 podman[242862]: 2025-12-01 20:48:42.08759532 +0000 UTC m=+0.162073013 container remove 19a61c342351f0604981443b82953a5a692b23d29d1b0d96681a814d12f1eb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_gates, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 20:48:42 compute-0 systemd[1]: libpod-conmon-19a61c342351f0604981443b82953a5a692b23d29d1b0d96681a814d12f1eb6a.scope: Deactivated successfully.
Dec 01 20:48:42 compute-0 python3.9[242897]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764622121.656317-1489-83798050372746/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 01 20:48:42 compute-0 sudo[242893]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:42 compute-0 podman[242924]: 2025-12-01 20:48:42.233000772 +0000 UTC m=+0.038614353 container create e6f09fb1a3969f78ba77831cd4463a83797004c923035a403da713f9a1b0a724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jones, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:48:42 compute-0 systemd[1]: Started libpod-conmon-e6f09fb1a3969f78ba77831cd4463a83797004c923035a403da713f9a1b0a724.scope.
Dec 01 20:48:42 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9002a2fde5ed6f01ef3fe47e312ebdc601f9fdbf90c79b696c7a9dc77952db86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9002a2fde5ed6f01ef3fe47e312ebdc601f9fdbf90c79b696c7a9dc77952db86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9002a2fde5ed6f01ef3fe47e312ebdc601f9fdbf90c79b696c7a9dc77952db86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9002a2fde5ed6f01ef3fe47e312ebdc601f9fdbf90c79b696c7a9dc77952db86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9002a2fde5ed6f01ef3fe47e312ebdc601f9fdbf90c79b696c7a9dc77952db86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:42 compute-0 podman[242924]: 2025-12-01 20:48:42.216811619 +0000 UTC m=+0.022425220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:48:42 compute-0 podman[242924]: 2025-12-01 20:48:42.325442019 +0000 UTC m=+0.131055620 container init e6f09fb1a3969f78ba77831cd4463a83797004c923035a403da713f9a1b0a724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 01 20:48:42 compute-0 podman[242924]: 2025-12-01 20:48:42.332512903 +0000 UTC m=+0.138126484 container start e6f09fb1a3969f78ba77831cd4463a83797004c923035a403da713f9a1b0a724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jones, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 01 20:48:42 compute-0 podman[242924]: 2025-12-01 20:48:42.336522174 +0000 UTC m=+0.142135785 container attach e6f09fb1a3969f78ba77831cd4463a83797004c923035a403da713f9a1b0a724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jones, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:48:42 compute-0 sudo[243019]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubobexueeiqsjyomkbsraxtxrnowknwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622121.656317-1489-83798050372746/AnsiballZ_systemd.py'
Dec 01 20:48:42 compute-0 sudo[243019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:42 compute-0 ceph-mon[75880]: pgmap v569: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:48:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:48:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:48:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:48:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:48:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:48:42 compute-0 python3.9[243021]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 01 20:48:42 compute-0 systemd[1]: Reloading.
Dec 01 20:48:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:42 compute-0 systemd-sysv-generator[243062]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:48:42 compute-0 systemd-rc-local-generator[243057]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:48:42 compute-0 kind_jones[242968]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:48:42 compute-0 kind_jones[242968]: --> All data devices are unavailable
Dec 01 20:48:42 compute-0 podman[242924]: 2025-12-01 20:48:42.80332011 +0000 UTC m=+0.608933691 container died e6f09fb1a3969f78ba77831cd4463a83797004c923035a403da713f9a1b0a724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jones, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:48:42 compute-0 systemd[1]: libpod-e6f09fb1a3969f78ba77831cd4463a83797004c923035a403da713f9a1b0a724.scope: Deactivated successfully.
Dec 01 20:48:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9002a2fde5ed6f01ef3fe47e312ebdc601f9fdbf90c79b696c7a9dc77952db86-merged.mount: Deactivated successfully.
Dec 01 20:48:43 compute-0 podman[242924]: 2025-12-01 20:48:43.002704441 +0000 UTC m=+0.808318022 container remove e6f09fb1a3969f78ba77831cd4463a83797004c923035a403da713f9a1b0a724 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jones, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 01 20:48:43 compute-0 systemd[1]: libpod-conmon-e6f09fb1a3969f78ba77831cd4463a83797004c923035a403da713f9a1b0a724.scope: Deactivated successfully.
Dec 01 20:48:43 compute-0 sudo[243019]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:43 compute-0 sudo[242721]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.058820) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622123058863, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 994, "num_deletes": 250, "total_data_size": 980684, "memory_usage": 997536, "flush_reason": "Manual Compaction"}
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622123066677, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 600082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11125, "largest_seqno": 12118, "table_properties": {"data_size": 596199, "index_size": 1535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9850, "raw_average_key_size": 20, "raw_value_size": 587928, "raw_average_value_size": 1194, "num_data_blocks": 70, "num_entries": 492, "num_filter_entries": 492, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764622023, "oldest_key_time": 1764622023, "file_creation_time": 1764622123, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 7897 microseconds, and 2556 cpu microseconds.
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.066723) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 600082 bytes OK
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.066739) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.067701) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.067716) EVENT_LOG_v1 {"time_micros": 1764622123067711, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.067736) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 975972, prev total WAL file size 975972, number of live WAL files 2.
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.068238) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(586KB)], [29(5891KB)]
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622123068299, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 6632704, "oldest_snapshot_seqno": -1}
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3249 keys, 4888450 bytes, temperature: kUnknown
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622123099744, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 4888450, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4865504, "index_size": 13749, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 75208, "raw_average_key_size": 23, "raw_value_size": 4806038, "raw_average_value_size": 1479, "num_data_blocks": 610, "num_entries": 3249, "num_filter_entries": 3249, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 1764622123, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.100227) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 4888450 bytes
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.101616) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.3 rd, 154.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 5.8 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(19.2) write-amplify(8.1) OK, records in: 3718, records dropped: 469 output_compression: NoCompression
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.101653) EVENT_LOG_v1 {"time_micros": 1764622123101639, "job": 12, "event": "compaction_finished", "compaction_time_micros": 31687, "compaction_time_cpu_micros": 12099, "output_level": 6, "num_output_files": 1, "total_output_size": 4888450, "num_input_records": 3718, "num_output_records": 3249, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622123102761, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:48:43 compute-0 sudo[243085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622123104512, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.068118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.104678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.104685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.104687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.104689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:48:43 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:48:43.104691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:48:43 compute-0 sudo[243085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:48:43 compute-0 sudo[243085]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:43 compute-0 sudo[243133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:48:43 compute-0 sudo[243133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:48:43 compute-0 sudo[243208]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzwnvgfjbrlrmceqfjaoiseyhzfzeicf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622121.656317-1489-83798050372746/AnsiballZ_systemd.py'
Dec 01 20:48:43 compute-0 sudo[243208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:43 compute-0 podman[243223]: 2025-12-01 20:48:43.460844272 +0000 UTC m=+0.046471913 container create 7cb723ff4f021dcfe33f6fa1817c826668aa7cb1a325dff1e6be5177b55c44fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 01 20:48:43 compute-0 systemd[1]: Started libpod-conmon-7cb723ff4f021dcfe33f6fa1817c826668aa7cb1a325dff1e6be5177b55c44fe.scope.
Dec 01 20:48:43 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:48:43 compute-0 python3.9[243210]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 01 20:48:43 compute-0 podman[243223]: 2025-12-01 20:48:43.436519019 +0000 UTC m=+0.022146710 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:48:43 compute-0 podman[243223]: 2025-12-01 20:48:43.535905046 +0000 UTC m=+0.121532747 container init 7cb723ff4f021dcfe33f6fa1817c826668aa7cb1a325dff1e6be5177b55c44fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:48:43 compute-0 podman[243223]: 2025-12-01 20:48:43.540788156 +0000 UTC m=+0.126415807 container start 7cb723ff4f021dcfe33f6fa1817c826668aa7cb1a325dff1e6be5177b55c44fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_thompson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:48:43 compute-0 podman[243223]: 2025-12-01 20:48:43.543784205 +0000 UTC m=+0.129411856 container attach 7cb723ff4f021dcfe33f6fa1817c826668aa7cb1a325dff1e6be5177b55c44fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 01 20:48:43 compute-0 amazing_thompson[243239]: 167 167
Dec 01 20:48:43 compute-0 podman[243223]: 2025-12-01 20:48:43.548132939 +0000 UTC m=+0.133760590 container died 7cb723ff4f021dcfe33f6fa1817c826668aa7cb1a325dff1e6be5177b55c44fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_thompson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 01 20:48:43 compute-0 systemd[1]: libpod-7cb723ff4f021dcfe33f6fa1817c826668aa7cb1a325dff1e6be5177b55c44fe.scope: Deactivated successfully.
Dec 01 20:48:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5948531cab3b6efd3dd98614422574c78eb8cecfa6f5819860f6df0f2e350e4-merged.mount: Deactivated successfully.
Dec 01 20:48:43 compute-0 podman[243223]: 2025-12-01 20:48:43.584319222 +0000 UTC m=+0.169946863 container remove 7cb723ff4f021dcfe33f6fa1817c826668aa7cb1a325dff1e6be5177b55c44fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:48:43 compute-0 systemd[1]: Reloading.
Dec 01 20:48:43 compute-0 systemd-rc-local-generator[243292]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 01 20:48:43 compute-0 systemd-sysv-generator[243296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 01 20:48:43 compute-0 podman[243274]: 2025-12-01 20:48:43.75781134 +0000 UTC m=+0.039876426 container create 66cccd84c98239db585400abf95276598b521c867b7004fe642a773de7bc8623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goodall, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 01 20:48:43 compute-0 podman[243274]: 2025-12-01 20:48:43.741852373 +0000 UTC m=+0.023917479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:48:43 compute-0 systemd[1]: libpod-conmon-7cb723ff4f021dcfe33f6fa1817c826668aa7cb1a325dff1e6be5177b55c44fe.scope: Deactivated successfully.
Dec 01 20:48:43 compute-0 systemd[1]: Started libpod-conmon-66cccd84c98239db585400abf95276598b521c867b7004fe642a773de7bc8623.scope.
Dec 01 20:48:43 compute-0 systemd[1]: Starting nova_compute container...
Dec 01 20:48:44 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1860871458a95ec89159546ec10ea29f4d286e01f440ddcdff8ac4b9bb7a779/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1860871458a95ec89159546ec10ea29f4d286e01f440ddcdff8ac4b9bb7a779/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1860871458a95ec89159546ec10ea29f4d286e01f440ddcdff8ac4b9bb7a779/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1860871458a95ec89159546ec10ea29f4d286e01f440ddcdff8ac4b9bb7a779/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:44 compute-0 podman[243274]: 2025-12-01 20:48:44.049486142 +0000 UTC m=+0.331551268 container init 66cccd84c98239db585400abf95276598b521c867b7004fe642a773de7bc8623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:48:44 compute-0 podman[243274]: 2025-12-01 20:48:44.056750582 +0000 UTC m=+0.338815678 container start 66cccd84c98239db585400abf95276598b521c867b7004fe642a773de7bc8623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goodall, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 01 20:48:44 compute-0 ceph-mon[75880]: pgmap v570: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:44 compute-0 podman[243274]: 2025-12-01 20:48:44.064808508 +0000 UTC m=+0.346873614 container attach 66cccd84c98239db585400abf95276598b521c867b7004fe642a773de7bc8623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 01 20:48:44 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca273817169149a44e777268387a1470915e481633eab0d185f9aa7df2a4a1e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca273817169149a44e777268387a1470915e481633eab0d185f9aa7df2a4a1e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca273817169149a44e777268387a1470915e481633eab0d185f9aa7df2a4a1e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca273817169149a44e777268387a1470915e481633eab0d185f9aa7df2a4a1e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca273817169149a44e777268387a1470915e481633eab0d185f9aa7df2a4a1e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:44 compute-0 podman[243319]: 2025-12-01 20:48:44.127319378 +0000 UTC m=+0.110947198 container init 28dbed3fb607852ced74411dc01ff72b2a570a9b6f9cb5ab7ed4d306bc3c84d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 01 20:48:44 compute-0 podman[243319]: 2025-12-01 20:48:44.137878496 +0000 UTC m=+0.121506296 container start 28dbed3fb607852ced74411dc01ff72b2a570a9b6f9cb5ab7ed4d306bc3c84d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 01 20:48:44 compute-0 podman[243319]: nova_compute
Dec 01 20:48:44 compute-0 nova_compute[243338]: + sudo -E kolla_set_configs
Dec 01 20:48:44 compute-0 systemd[1]: Started nova_compute container.
Dec 01 20:48:44 compute-0 sudo[243208]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Validating config file
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying service configuration files
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Deleting /etc/ceph
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Creating directory /etc/ceph
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /etc/ceph
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Writing out command to execute
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 01 20:48:44 compute-0 nova_compute[243338]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 01 20:48:44 compute-0 nova_compute[243338]: ++ cat /run_command
Dec 01 20:48:44 compute-0 nova_compute[243338]: + CMD=nova-compute
Dec 01 20:48:44 compute-0 nova_compute[243338]: + ARGS=
Dec 01 20:48:44 compute-0 nova_compute[243338]: + sudo kolla_copy_cacerts
Dec 01 20:48:44 compute-0 nova_compute[243338]: + [[ ! -n '' ]]
Dec 01 20:48:44 compute-0 nova_compute[243338]: + . kolla_extend_start
Dec 01 20:48:44 compute-0 nova_compute[243338]: Running command: 'nova-compute'
Dec 01 20:48:44 compute-0 nova_compute[243338]: + echo 'Running command: '\''nova-compute'\'''
Dec 01 20:48:44 compute-0 nova_compute[243338]: + umask 0022
Dec 01 20:48:44 compute-0 nova_compute[243338]: + exec nova-compute
Dec 01 20:48:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:48:44.348 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:48:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:48:44.348 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:48:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:48:44.348 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:48:44 compute-0 amazing_goodall[243318]: {
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:     "0": [
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:         {
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "devices": [
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "/dev/loop3"
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             ],
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_name": "ceph_lv0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_size": "21470642176",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "name": "ceph_lv0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "tags": {
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.cluster_name": "ceph",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.crush_device_class": "",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.encrypted": "0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.objectstore": "bluestore",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.osd_id": "0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.type": "block",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.vdo": "0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.with_tpm": "0"
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             },
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "type": "block",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "vg_name": "ceph_vg0"
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:         }
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:     ],
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:     "1": [
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:         {
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "devices": [
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "/dev/loop4"
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             ],
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_name": "ceph_lv1",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_size": "21470642176",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "name": "ceph_lv1",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "tags": {
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.cluster_name": "ceph",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.crush_device_class": "",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.encrypted": "0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.objectstore": "bluestore",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.osd_id": "1",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.type": "block",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.vdo": "0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.with_tpm": "0"
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             },
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "type": "block",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "vg_name": "ceph_vg1"
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:         }
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:     ],
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:     "2": [
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:         {
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "devices": [
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "/dev/loop5"
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             ],
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_name": "ceph_lv2",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_size": "21470642176",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "name": "ceph_lv2",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "tags": {
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.cluster_name": "ceph",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.crush_device_class": "",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.encrypted": "0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.objectstore": "bluestore",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.osd_id": "2",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.type": "block",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.vdo": "0",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:                 "ceph.with_tpm": "0"
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             },
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "type": "block",
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:             "vg_name": "ceph_vg2"
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:         }
Dec 01 20:48:44 compute-0 amazing_goodall[243318]:     ]
Dec 01 20:48:44 compute-0 amazing_goodall[243318]: }
Dec 01 20:48:44 compute-0 systemd[1]: libpod-66cccd84c98239db585400abf95276598b521c867b7004fe642a773de7bc8623.scope: Deactivated successfully.
Dec 01 20:48:44 compute-0 podman[243274]: 2025-12-01 20:48:44.40438002 +0000 UTC m=+0.686445106 container died 66cccd84c98239db585400abf95276598b521c867b7004fe642a773de7bc8623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:48:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1860871458a95ec89159546ec10ea29f4d286e01f440ddcdff8ac4b9bb7a779-merged.mount: Deactivated successfully.
Dec 01 20:48:44 compute-0 podman[243274]: 2025-12-01 20:48:44.458748662 +0000 UTC m=+0.740813738 container remove 66cccd84c98239db585400abf95276598b521c867b7004fe642a773de7bc8623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_goodall, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 01 20:48:44 compute-0 systemd[1]: libpod-conmon-66cccd84c98239db585400abf95276598b521c867b7004fe642a773de7bc8623.scope: Deactivated successfully.
Dec 01 20:48:44 compute-0 sudo[243133]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:44 compute-0 sudo[243390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:48:44 compute-0 sudo[243390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:48:44 compute-0 sudo[243390]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:44 compute-0 sudo[243439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:48:44 compute-0 sudo[243439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:48:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:44 compute-0 podman[243539]: 2025-12-01 20:48:44.829870324 +0000 UTC m=+0.086155871 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 01 20:48:44 compute-0 podman[243596]: 2025-12-01 20:48:44.890189852 +0000 UTC m=+0.043246226 container create f4abf615995090eb1b151baa7dcda67123cfd28f93089cb41f36a92e27f12742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:48:44 compute-0 systemd[1]: Started libpod-conmon-f4abf615995090eb1b151baa7dcda67123cfd28f93089cb41f36a92e27f12742.scope.
Dec 01 20:48:44 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:48:44 compute-0 python3.9[243577]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:48:44 compute-0 podman[243596]: 2025-12-01 20:48:44.871678872 +0000 UTC m=+0.024735246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:48:44 compute-0 podman[243596]: 2025-12-01 20:48:44.966374733 +0000 UTC m=+0.119431117 container init f4abf615995090eb1b151baa7dcda67123cfd28f93089cb41f36a92e27f12742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_jackson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:48:44 compute-0 podman[243596]: 2025-12-01 20:48:44.972850287 +0000 UTC m=+0.125906661 container start f4abf615995090eb1b151baa7dcda67123cfd28f93089cb41f36a92e27f12742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_jackson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 01 20:48:44 compute-0 podman[243596]: 2025-12-01 20:48:44.975431952 +0000 UTC m=+0.128488336 container attach f4abf615995090eb1b151baa7dcda67123cfd28f93089cb41f36a92e27f12742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:48:44 compute-0 optimistic_jackson[243612]: 167 167
Dec 01 20:48:44 compute-0 systemd[1]: libpod-f4abf615995090eb1b151baa7dcda67123cfd28f93089cb41f36a92e27f12742.scope: Deactivated successfully.
Dec 01 20:48:44 compute-0 podman[243596]: 2025-12-01 20:48:44.979154024 +0000 UTC m=+0.132210398 container died f4abf615995090eb1b151baa7dcda67123cfd28f93089cb41f36a92e27f12742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:48:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cd4d169be20a6ff50b48bddaf7f78e1a187ec7bbcff706c63047d42b87d51b4-merged.mount: Deactivated successfully.
Dec 01 20:48:45 compute-0 podman[243596]: 2025-12-01 20:48:45.012869845 +0000 UTC m=+0.165926219 container remove f4abf615995090eb1b151baa7dcda67123cfd28f93089cb41f36a92e27f12742 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_jackson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:48:45 compute-0 systemd[1]: libpod-conmon-f4abf615995090eb1b151baa7dcda67123cfd28f93089cb41f36a92e27f12742.scope: Deactivated successfully.
Dec 01 20:48:45 compute-0 podman[243660]: 2025-12-01 20:48:45.22903747 +0000 UTC m=+0.094073941 container create 17c4a773c7ab10a86c168503c098013132f7386c9c102f997afb7a15deddff35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_montalcini, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:48:45 compute-0 podman[243660]: 2025-12-01 20:48:45.156567641 +0000 UTC m=+0.021604132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:48:45 compute-0 systemd[1]: Started libpod-conmon-17c4a773c7ab10a86c168503c098013132f7386c9c102f997afb7a15deddff35.scope.
Dec 01 20:48:45 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a654c6a1add89ed32803792d9fe72243267b8d3cd4ee08a3f8c2cda6d7bf27fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a654c6a1add89ed32803792d9fe72243267b8d3cd4ee08a3f8c2cda6d7bf27fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a654c6a1add89ed32803792d9fe72243267b8d3cd4ee08a3f8c2cda6d7bf27fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a654c6a1add89ed32803792d9fe72243267b8d3cd4ee08a3f8c2cda6d7bf27fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:45 compute-0 podman[243660]: 2025-12-01 20:48:45.308724477 +0000 UTC m=+0.173760968 container init 17c4a773c7ab10a86c168503c098013132f7386c9c102f997afb7a15deddff35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 01 20:48:45 compute-0 podman[243660]: 2025-12-01 20:48:45.314381373 +0000 UTC m=+0.179417844 container start 17c4a773c7ab10a86c168503c098013132f7386c9c102f997afb7a15deddff35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_montalcini, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:48:45 compute-0 podman[243660]: 2025-12-01 20:48:45.317039771 +0000 UTC m=+0.182076242 container attach 17c4a773c7ab10a86c168503c098013132f7386c9c102f997afb7a15deddff35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_montalcini, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 01 20:48:45 compute-0 python3.9[243806]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:48:45 compute-0 lvm[243955]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:48:45 compute-0 lvm[243958]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:48:45 compute-0 lvm[243955]: VG ceph_vg0 finished
Dec 01 20:48:45 compute-0 lvm[243958]: VG ceph_vg1 finished
Dec 01 20:48:45 compute-0 lvm[243977]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:48:45 compute-0 lvm[243977]: VG ceph_vg2 finished
Dec 01 20:48:45 compute-0 lvm[243983]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:48:45 compute-0 lvm[243983]: VG ceph_vg1 finished
Dec 01 20:48:46 compute-0 lvm[243993]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:48:46 compute-0 lvm[243993]: VG ceph_vg1 finished
Dec 01 20:48:46 compute-0 angry_montalcini[243705]: {}
Dec 01 20:48:46 compute-0 systemd[1]: libpod-17c4a773c7ab10a86c168503c098013132f7386c9c102f997afb7a15deddff35.scope: Deactivated successfully.
Dec 01 20:48:46 compute-0 systemd[1]: libpod-17c4a773c7ab10a86c168503c098013132f7386c9c102f997afb7a15deddff35.scope: Consumed 1.180s CPU time.
Dec 01 20:48:46 compute-0 podman[243660]: 2025-12-01 20:48:46.054267639 +0000 UTC m=+0.919304120 container died 17c4a773c7ab10a86c168503c098013132f7386c9c102f997afb7a15deddff35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 01 20:48:46 compute-0 ceph-mon[75880]: pgmap v571: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a654c6a1add89ed32803792d9fe72243267b8d3cd4ee08a3f8c2cda6d7bf27fa-merged.mount: Deactivated successfully.
Dec 01 20:48:46 compute-0 podman[243660]: 2025-12-01 20:48:46.102277381 +0000 UTC m=+0.967313872 container remove 17c4a773c7ab10a86c168503c098013132f7386c9c102f997afb7a15deddff35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:48:46 compute-0 systemd[1]: libpod-conmon-17c4a773c7ab10a86c168503c098013132f7386c9c102f997afb7a15deddff35.scope: Deactivated successfully.
Dec 01 20:48:46 compute-0 sudo[243439]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:48:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:48:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:48:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:48:46 compute-0 sudo[244050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:48:46 compute-0 sudo[244050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:48:46 compute-0 sudo[244050]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:46 compute-0 python3.9[244044]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 01 20:48:46 compute-0 nova_compute[243338]: 2025-12-01 20:48:46.286 243342 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 01 20:48:46 compute-0 nova_compute[243338]: 2025-12-01 20:48:46.287 243342 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 01 20:48:46 compute-0 nova_compute[243338]: 2025-12-01 20:48:46.287 243342 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 01 20:48:46 compute-0 nova_compute[243338]: 2025-12-01 20:48:46.287 243342 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 01 20:48:46 compute-0 nova_compute[243338]: 2025-12-01 20:48:46.417 243342 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:48:46 compute-0 nova_compute[243338]: 2025-12-01 20:48:46.444 243342 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:48:46 compute-0 nova_compute[243338]: 2025-12-01 20:48:46.445 243342 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 01 20:48:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:47 compute-0 sudo[244239]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqxnrtgskhmclkcsotmdtzdubbwvbmpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622126.5066218-1549-35237531179380/AnsiballZ_podman_container.py'
Dec 01 20:48:47 compute-0 sudo[244239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:47 compute-0 podman[244202]: 2025-12-01 20:48:47.045499779 +0000 UTC m=+0.055744548 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.061 243342 INFO nova.virt.driver [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 01 20:48:47 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:48:47 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.174 243342 INFO nova.compute.provider_config [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.197 243342 DEBUG oslo_concurrency.lockutils [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.198 243342 DEBUG oslo_concurrency.lockutils [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.198 243342 DEBUG oslo_concurrency.lockutils [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.199 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.199 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.199 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.199 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.199 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.200 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.200 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.200 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.200 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.200 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.201 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.201 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.201 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.201 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.202 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.202 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.202 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.202 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.202 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.203 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.203 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.203 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.203 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.203 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.204 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.204 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.204 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.204 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.204 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.205 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.205 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.205 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.205 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.206 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.206 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.206 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.206 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.206 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.207 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.207 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.207 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.207 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.208 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.208 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.208 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.208 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.208 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.209 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.209 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.209 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.209 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.209 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.210 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.210 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.210 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.210 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.210 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.211 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.211 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.211 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.211 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.211 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.212 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.212 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.212 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.212 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.212 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.213 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.213 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.213 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.213 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.213 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.214 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.214 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.214 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.214 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.215 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.215 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.215 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.215 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.215 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.216 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.216 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.216 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.216 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.217 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.217 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.217 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.217 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.217 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.218 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.218 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.218 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.218 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.218 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.219 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.219 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.219 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.219 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.220 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.220 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.220 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.220 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.220 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.221 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.221 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.221 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.221 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.221 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.222 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.222 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.222 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.222 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.223 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.223 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.223 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.223 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.223 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.223 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.224 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.224 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.224 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.224 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.225 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.225 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.225 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.225 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.225 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.226 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.226 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.226 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.226 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.226 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.227 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.227 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.227 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.227 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.227 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.227 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.228 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.228 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.228 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.228 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.228 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.228 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.228 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.229 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.229 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.229 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.229 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.229 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.229 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.229 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.230 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.230 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.230 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.230 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.230 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.230 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.230 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.231 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.231 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.231 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.231 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.231 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.231 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.231 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.232 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.232 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.232 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.232 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.232 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.232 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.232 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.233 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.233 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.233 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.233 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.233 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.233 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.233 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.234 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.234 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.234 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.234 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.234 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.234 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.234 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.235 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.235 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.235 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.235 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.235 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.235 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.235 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.236 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.236 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.236 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.236 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.236 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.236 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.236 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.236 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.237 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.237 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.237 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.237 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.237 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.237 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.237 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.238 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.238 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.238 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.238 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.238 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.238 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.238 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.239 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.239 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.239 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.239 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.239 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.239 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.239 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.240 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.240 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.240 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.240 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.240 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.240 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.240 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.240 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.241 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.241 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.241 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.241 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.241 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.241 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.242 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.242 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.242 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.242 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.242 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.242 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.242 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.243 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.243 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.243 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.243 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.243 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.243 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.243 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.243 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.244 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.244 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.244 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.244 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.244 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.244 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.244 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.245 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.245 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.245 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.245 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.245 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.245 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.245 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.246 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.246 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.246 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.246 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.246 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.246 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.246 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.246 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.247 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.247 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.247 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.247 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.247 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.247 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.247 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.248 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.248 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.248 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.248 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.248 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.248 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.248 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.249 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.249 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.249 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.249 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.249 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.249 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.249 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.250 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.250 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.250 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.250 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.250 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.250 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.250 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.251 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.251 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.251 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.251 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.251 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.251 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.251 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.251 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.252 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.252 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.252 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.252 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.252 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.252 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.252 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.253 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.253 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.253 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.253 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.253 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.253 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.253 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.254 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.254 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.254 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.254 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.254 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.254 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.254 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.254 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.255 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.255 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.255 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.255 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.255 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.255 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.255 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.256 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.256 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.256 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.256 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.256 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.256 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.256 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.257 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.257 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.257 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.257 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.257 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.257 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.257 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.258 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.258 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.258 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.258 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.258 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.259 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.259 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.259 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.259 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.259 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.259 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.259 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.259 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.260 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.260 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.260 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.260 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.260 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.260 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.260 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.261 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.261 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.261 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.261 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.261 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.261 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.261 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.262 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.262 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.262 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.262 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.262 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.262 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.262 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.263 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.263 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.263 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.263 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.263 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.263 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.263 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.264 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.264 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.264 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.264 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.264 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.264 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.264 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.264 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.265 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.265 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.265 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.265 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.265 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.265 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.265 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.266 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.266 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.266 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.266 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.266 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.266 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.266 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.267 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.267 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.267 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.267 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.267 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.267 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.267 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.267 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.268 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.268 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.268 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.268 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.268 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.268 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.268 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.269 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.269 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.269 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.269 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.269 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.269 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.269 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.270 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.270 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.270 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.270 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.270 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.270 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.271 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.271 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.271 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.271 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.271 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.271 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.271 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.272 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.272 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.272 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.272 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.272 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.272 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.273 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.273 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.273 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.273 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.273 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.273 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.273 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.274 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.274 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.274 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.274 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.274 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.274 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.274 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.274 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.275 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.275 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.275 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.275 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.275 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.275 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.275 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.276 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.276 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.276 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.276 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.276 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.276 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.276 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.277 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.277 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.277 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.277 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.277 243342 WARNING oslo_config.cfg [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 01 20:48:47 compute-0 nova_compute[243338]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 01 20:48:47 compute-0 nova_compute[243338]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 01 20:48:47 compute-0 nova_compute[243338]: and ``live_migration_inbound_addr`` respectively.
Dec 01 20:48:47 compute-0 nova_compute[243338]: ).  Its value may be silently ignored in the future.
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.277 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.278 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.278 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.278 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.278 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.278 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.278 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.278 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.279 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.279 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.279 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.279 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.279 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.279 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.279 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.280 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.280 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.280 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.280 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.rbd_secret_uuid        = dcf60a89-bba0-58b0-a1bf-d4bde723201b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.280 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.280 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.280 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.281 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.281 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.281 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.281 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.281 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.281 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.281 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.282 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.282 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.282 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.282 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.282 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.282 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.282 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.283 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.283 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.283 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.283 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.283 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.283 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.283 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.284 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.284 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.284 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.284 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.284 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.284 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.284 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.285 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.285 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.285 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.285 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.285 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.285 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.286 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.286 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.286 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.286 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.286 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.287 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.287 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.287 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.287 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.287 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.287 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.288 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.288 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.288 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.288 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.288 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.289 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.289 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.289 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.289 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.289 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.290 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.290 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.290 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.290 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.290 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.291 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.291 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.291 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.291 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.291 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.292 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.292 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.292 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.292 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.292 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.293 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.293 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.293 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.293 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.293 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.293 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.294 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.294 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.294 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.294 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.294 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.295 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.295 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.295 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.295 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.295 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.295 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.296 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.296 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.296 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.296 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.296 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.296 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.297 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.297 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.297 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.297 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.297 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.298 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.298 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.298 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.298 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.298 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.299 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.299 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.299 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.299 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.299 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.299 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.300 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.300 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.300 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.300 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.300 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.301 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.301 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.301 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.301 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.302 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.302 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.302 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.302 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.302 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 python3.9[244247]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.302 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.303 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.303 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.303 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.303 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.303 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.304 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.304 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.304 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.304 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.304 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.305 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.305 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.305 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.305 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.305 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.306 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.306 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.306 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.306 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.306 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.306 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.306 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.307 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.307 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.307 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.307 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.307 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.307 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.307 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.308 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.308 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.308 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.308 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.308 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.309 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.309 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.309 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.309 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.309 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.309 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.309 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.310 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.310 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.310 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.310 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.310 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.310 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.311 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.311 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.311 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.311 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.311 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.311 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.312 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.312 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.312 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.312 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.312 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.312 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.312 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.313 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.313 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.313 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.313 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.313 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.313 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.314 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.314 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.314 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.314 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.314 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.314 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.314 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.314 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.315 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.315 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.315 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.315 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.315 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.315 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.315 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.316 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.316 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.316 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.316 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.316 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.316 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.316 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.317 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.317 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.317 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.317 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.317 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.317 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.317 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.317 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.318 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.318 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.318 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.318 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.318 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.318 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.319 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.319 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.319 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.319 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.319 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.319 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.319 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.320 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.320 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.320 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.320 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.320 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.320 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.321 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.321 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.321 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.321 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.321 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.321 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.321 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.322 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.322 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.322 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.322 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.322 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.322 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.322 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.323 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.323 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.323 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.323 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.323 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.323 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.324 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.324 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.324 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.324 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.324 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.324 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.324 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.325 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.325 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.325 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.325 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.325 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.325 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.325 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.326 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.326 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.326 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.326 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.326 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.326 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.326 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.327 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.327 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.327 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.327 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.327 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.327 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.327 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.328 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.328 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.328 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.328 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.328 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.328 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.328 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.329 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.329 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.329 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.329 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.329 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.329 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.329 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.330 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.330 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.330 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.330 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.330 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.330 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.330 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.331 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.331 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.331 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.331 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.331 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.331 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.331 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.332 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.332 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.332 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.332 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.332 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.332 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.332 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.333 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.333 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.333 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.333 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.333 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.333 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.333 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.333 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.334 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.334 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.334 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.334 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.334 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.334 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.334 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.335 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.335 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.335 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.335 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.335 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.335 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.335 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.336 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.336 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.336 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.336 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.336 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.336 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.336 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.336 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.337 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.337 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.337 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.337 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.337 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.337 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.337 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.338 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.338 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.338 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.338 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.338 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.338 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.338 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.339 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.339 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.339 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.339 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.339 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.339 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.339 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.339 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.340 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.340 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.340 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.340 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.340 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.340 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.341 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.341 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.341 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.341 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.341 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.341 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.341 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.341 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.342 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.342 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.342 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.342 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.342 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.342 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.342 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.343 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.343 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.343 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.343 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.343 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.343 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.343 243342 DEBUG oslo_service.service [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.344 243342 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 01 20:48:47 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.356 243342 DEBUG nova.virt.libvirt.host [None req-803af852-765d-4306-8efd-062cd7becab6 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.356 243342 DEBUG nova.virt.libvirt.host [None req-803af852-765d-4306-8efd-062cd7becab6 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.357 243342 DEBUG nova.virt.libvirt.host [None req-803af852-765d-4306-8efd-062cd7becab6 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.357 243342 DEBUG nova.virt.libvirt.host [None req-803af852-765d-4306-8efd-062cd7becab6 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 01 20:48:47 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 01 20:48:47 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 01 20:48:47 compute-0 sudo[244239]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.424 243342 DEBUG nova.virt.libvirt.host [None req-803af852-765d-4306-8efd-062cd7becab6 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6b3abf87f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.428 243342 DEBUG nova.virt.libvirt.host [None req-803af852-765d-4306-8efd-062cd7becab6 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6b3abf87f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.429 243342 INFO nova.virt.libvirt.driver [None req-803af852-765d-4306-8efd-062cd7becab6 - - - - - -] Connection event '1' reason 'None'
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.446 243342 WARNING nova.virt.libvirt.driver [None req-803af852-765d-4306-8efd-062cd7becab6 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 01 20:48:47 compute-0 nova_compute[243338]: 2025-12-01 20:48:47.447 243342 DEBUG nova.virt.libvirt.volume.mount [None req-803af852-765d-4306-8efd-062cd7becab6 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 01 20:48:47 compute-0 podman[244293]: 2025-12-01 20:48:47.492919566 +0000 UTC m=+0.108963093 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 01 20:48:47 compute-0 sudo[244497]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpbxuhexwycqdrvrnrmuryxjkqzcalkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622127.589447-1557-229077047591791/AnsiballZ_systemd.py'
Dec 01 20:48:47 compute-0 sudo[244497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:48 compute-0 python3.9[244499]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 01 20:48:48 compute-0 ceph-mon[75880]: pgmap v572: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:48 compute-0 systemd[1]: Stopping nova_compute container...
Dec 01 20:48:48 compute-0 nova_compute[243338]: 2025-12-01 20:48:48.286 243342 DEBUG oslo_concurrency.lockutils [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 01 20:48:48 compute-0 nova_compute[243338]: 2025-12-01 20:48:48.287 243342 DEBUG oslo_concurrency.lockutils [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 01 20:48:48 compute-0 nova_compute[243338]: 2025-12-01 20:48:48.287 243342 DEBUG oslo_concurrency.lockutils [None req-5e528da0-293f-4ff6-ae09-b39626dbfb3d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 01 20:48:48 compute-0 virtqemud[244294]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 01 20:48:48 compute-0 systemd[1]: libpod-28dbed3fb607852ced74411dc01ff72b2a570a9b6f9cb5ab7ed4d306bc3c84d2.scope: Deactivated successfully.
Dec 01 20:48:48 compute-0 virtqemud[244294]: hostname: compute-0
Dec 01 20:48:48 compute-0 virtqemud[244294]: End of file while reading data: Input/output error
Dec 01 20:48:48 compute-0 systemd[1]: libpod-28dbed3fb607852ced74411dc01ff72b2a570a9b6f9cb5ab7ed4d306bc3c84d2.scope: Consumed 2.996s CPU time.
Dec 01 20:48:48 compute-0 podman[244511]: 2025-12-01 20:48:48.707470097 +0000 UTC m=+0.473661222 container died 28dbed3fb607852ced74411dc01ff72b2a570a9b6f9cb5ab7ed4d306bc3c84d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 01 20:48:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28dbed3fb607852ced74411dc01ff72b2a570a9b6f9cb5ab7ed4d306bc3c84d2-userdata-shm.mount: Deactivated successfully.
Dec 01 20:48:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ca273817169149a44e777268387a1470915e481633eab0d185f9aa7df2a4a1e-merged.mount: Deactivated successfully.
Dec 01 20:48:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v573: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:49 compute-0 podman[244511]: 2025-12-01 20:48:49.547468594 +0000 UTC m=+1.313659699 container cleanup 28dbed3fb607852ced74411dc01ff72b2a570a9b6f9cb5ab7ed4d306bc3c84d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec 01 20:48:49 compute-0 podman[244511]: nova_compute
Dec 01 20:48:49 compute-0 ceph-mon[75880]: pgmap v573: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:49 compute-0 podman[244539]: nova_compute
Dec 01 20:48:49 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 01 20:48:49 compute-0 systemd[1]: Stopped nova_compute container.
Dec 01 20:48:49 compute-0 systemd[1]: Starting nova_compute container...
Dec 01 20:48:49 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca273817169149a44e777268387a1470915e481633eab0d185f9aa7df2a4a1e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca273817169149a44e777268387a1470915e481633eab0d185f9aa7df2a4a1e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca273817169149a44e777268387a1470915e481633eab0d185f9aa7df2a4a1e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca273817169149a44e777268387a1470915e481633eab0d185f9aa7df2a4a1e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ca273817169149a44e777268387a1470915e481633eab0d185f9aa7df2a4a1e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:49 compute-0 podman[244553]: 2025-12-01 20:48:49.704578751 +0000 UTC m=+0.077717602 container init 28dbed3fb607852ced74411dc01ff72b2a570a9b6f9cb5ab7ed4d306bc3c84d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Dec 01 20:48:49 compute-0 podman[244553]: 2025-12-01 20:48:49.713898718 +0000 UTC m=+0.087037549 container start 28dbed3fb607852ced74411dc01ff72b2a570a9b6f9cb5ab7ed4d306bc3c84d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:48:49 compute-0 podman[244553]: nova_compute
Dec 01 20:48:49 compute-0 nova_compute[244568]: + sudo -E kolla_set_configs
Dec 01 20:48:49 compute-0 systemd[1]: Started nova_compute container.
Dec 01 20:48:49 compute-0 sudo[244497]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Validating config file
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying service configuration files
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Deleting /etc/ceph
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Creating directory /etc/ceph
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /etc/ceph
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Writing out command to execute
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 01 20:48:49 compute-0 nova_compute[244568]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 01 20:48:49 compute-0 nova_compute[244568]: ++ cat /run_command
Dec 01 20:48:49 compute-0 nova_compute[244568]: + CMD=nova-compute
Dec 01 20:48:49 compute-0 nova_compute[244568]: + ARGS=
Dec 01 20:48:49 compute-0 nova_compute[244568]: + sudo kolla_copy_cacerts
Dec 01 20:48:49 compute-0 nova_compute[244568]: + [[ ! -n '' ]]
Dec 01 20:48:49 compute-0 nova_compute[244568]: + . kolla_extend_start
Dec 01 20:48:49 compute-0 nova_compute[244568]: Running command: 'nova-compute'
Dec 01 20:48:49 compute-0 nova_compute[244568]: + echo 'Running command: '\''nova-compute'\'''
Dec 01 20:48:49 compute-0 nova_compute[244568]: + umask 0022
Dec 01 20:48:49 compute-0 nova_compute[244568]: + exec nova-compute
Dec 01 20:48:50 compute-0 sudo[244729]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpdvzitqcxtylqfvvpdtvvrxikqlijgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764622129.9316428-1566-36208708018843/AnsiballZ_podman_container.py'
Dec 01 20:48:50 compute-0 sudo[244729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:48:50 compute-0 python3.9[244731]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 01 20:48:50 compute-0 systemd[1]: Started libpod-conmon-22a53ac74756232b7838eac35bc7c427528dea43f004c5d19bff5976336b71d1.scope.
Dec 01 20:48:50 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0fae60e10bbbb689d720073023fb7ea0d6f35d575559a508fb0943b33023ca/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0fae60e10bbbb689d720073023fb7ea0d6f35d575559a508fb0943b33023ca/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0fae60e10bbbb689d720073023fb7ea0d6f35d575559a508fb0943b33023ca/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 01 20:48:50 compute-0 podman[244757]: 2025-12-01 20:48:50.732651366 +0000 UTC m=+0.144347588 container init 22a53ac74756232b7838eac35bc7c427528dea43f004c5d19bff5976336b71d1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute_init)
Dec 01 20:48:50 compute-0 podman[244757]: 2025-12-01 20:48:50.740551476 +0000 UTC m=+0.152247668 container start 22a53ac74756232b7838eac35bc7c427528dea43f004c5d19bff5976336b71d1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 01 20:48:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:50 compute-0 python3.9[244731]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Applying nova statedir ownership
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 01 20:48:50 compute-0 nova_compute_init[244779]: INFO:nova_statedir:Nova statedir ownership complete
Dec 01 20:48:50 compute-0 systemd[1]: libpod-22a53ac74756232b7838eac35bc7c427528dea43f004c5d19bff5976336b71d1.scope: Deactivated successfully.
Dec 01 20:48:50 compute-0 podman[244780]: 2025-12-01 20:48:50.795381464 +0000 UTC m=+0.027503078 container died 22a53ac74756232b7838eac35bc7c427528dea43f004c5d19bff5976336b71d1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 01 20:48:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22a53ac74756232b7838eac35bc7c427528dea43f004c5d19bff5976336b71d1-userdata-shm.mount: Deactivated successfully.
Dec 01 20:48:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-be0fae60e10bbbb689d720073023fb7ea0d6f35d575559a508fb0943b33023ca-merged.mount: Deactivated successfully.
Dec 01 20:48:50 compute-0 podman[244790]: 2025-12-01 20:48:50.871656988 +0000 UTC m=+0.068246911 container cleanup 22a53ac74756232b7838eac35bc7c427528dea43f004c5d19bff5976336b71d1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 01 20:48:50 compute-0 systemd[1]: libpod-conmon-22a53ac74756232b7838eac35bc7c427528dea43f004c5d19bff5976336b71d1.scope: Deactivated successfully.
Dec 01 20:48:50 compute-0 sudo[244729]: pam_unix(sudo:session): session closed for user root
Dec 01 20:48:51 compute-0 sshd-session[214539]: Connection closed by 192.168.122.30 port 33788
Dec 01 20:48:51 compute-0 sshd-session[214536]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:48:51 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Dec 01 20:48:51 compute-0 systemd[1]: session-51.scope: Consumed 2min 13.065s CPU time.
Dec 01 20:48:51 compute-0 systemd-logind[796]: Session 51 logged out. Waiting for processes to exit.
Dec 01 20:48:51 compute-0 systemd-logind[796]: Removed session 51.
Dec 01 20:48:51 compute-0 nova_compute[244568]: 2025-12-01 20:48:51.786 244572 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 01 20:48:51 compute-0 nova_compute[244568]: 2025-12-01 20:48:51.787 244572 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 01 20:48:51 compute-0 nova_compute[244568]: 2025-12-01 20:48:51.787 244572 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 01 20:48:51 compute-0 nova_compute[244568]: 2025-12-01 20:48:51.787 244572 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 01 20:48:51 compute-0 ceph-mon[75880]: pgmap v574: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:51 compute-0 nova_compute[244568]: 2025-12-01 20:48:51.925 244572 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:48:51 compute-0 nova_compute[244568]: 2025-12-01 20:48:51.947 244572 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:48:51 compute-0 nova_compute[244568]: 2025-12-01 20:48:51.948 244572 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.400 244572 INFO nova.virt.driver [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.493 244572 INFO nova.compute.provider_config [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.506 244572 DEBUG oslo_concurrency.lockutils [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.507 244572 DEBUG oslo_concurrency.lockutils [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.507 244572 DEBUG oslo_concurrency.lockutils [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.507 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.507 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.507 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.507 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.508 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.508 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.508 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.508 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.508 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.508 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.509 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.509 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.509 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.509 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.509 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.509 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.509 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.510 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.510 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.510 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.510 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.510 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.510 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.510 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.511 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.511 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.511 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.511 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.511 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.511 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.512 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.512 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.512 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.512 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.512 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.512 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.513 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.513 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.513 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.513 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.513 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.513 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.514 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.514 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.514 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.514 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.514 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.514 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.515 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.515 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.515 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.515 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.515 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.515 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.515 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.516 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.516 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.516 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.516 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.516 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.516 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.517 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.517 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.517 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.517 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.517 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.517 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.517 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.518 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.518 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.518 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.518 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.518 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.518 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.519 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.519 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.519 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.519 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.519 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.519 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.520 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.520 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.520 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.520 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.520 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.520 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.520 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.520 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.521 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.521 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.521 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.521 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.521 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.521 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.522 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.522 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.522 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.522 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.522 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.522 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.522 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.523 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.523 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.523 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.523 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.523 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.523 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.523 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.523 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.524 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.524 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.524 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.524 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.524 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.524 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.525 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.525 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.525 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.525 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.525 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.525 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.525 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.526 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.526 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.526 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.526 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.526 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.526 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.526 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.526 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.527 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.527 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.527 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.527 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.527 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.527 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.527 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.528 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.528 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.528 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.528 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.528 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.528 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.528 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.529 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.529 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.529 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.529 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.529 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.529 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.529 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.530 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.530 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.530 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.530 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.530 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.530 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.530 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.531 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.531 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.531 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.531 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.531 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.531 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.532 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.532 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.532 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.532 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.532 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.532 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.532 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.533 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.533 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.533 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.533 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.533 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.533 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.533 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.534 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.534 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.534 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.534 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.534 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.534 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.534 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.535 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.535 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.535 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.535 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.535 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.535 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.535 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.536 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.536 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.536 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.536 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.536 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.536 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.536 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.537 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.537 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.537 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.537 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.537 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.537 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.537 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.538 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.538 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.538 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.538 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.538 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.538 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.538 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.539 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.539 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.539 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.539 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.539 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.539 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.540 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.540 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.540 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.540 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.540 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.540 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.540 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.541 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.541 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.541 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.541 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.541 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.541 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.541 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.542 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.542 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.542 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.542 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.542 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.542 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.542 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.543 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.543 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.543 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.543 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.543 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.543 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.543 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.544 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.544 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.544 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.544 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.544 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.544 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.544 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.545 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.545 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.545 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.545 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.545 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.545 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.545 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.546 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.546 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.546 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.546 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.546 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.546 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.546 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.547 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.547 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.547 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.547 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.547 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.547 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.548 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.548 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.548 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.548 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.548 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.548 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.549 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.549 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.549 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.549 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.549 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.549 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.549 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.550 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.550 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.550 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.550 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.550 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.550 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.550 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.551 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.551 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.551 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.551 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.551 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.551 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.551 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.552 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.552 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.552 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.552 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.552 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.552 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.553 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.553 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.553 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.553 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.553 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.553 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.553 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.554 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.554 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.554 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.554 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.554 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.554 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.554 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.555 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.555 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.555 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.555 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.555 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.555 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.555 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.556 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.556 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.556 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.556 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.556 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.556 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.556 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.557 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.557 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.557 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.557 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.557 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.557 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.557 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.558 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.558 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.558 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.558 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.558 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.558 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.559 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.559 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.559 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.559 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.559 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.559 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.559 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.560 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.560 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.560 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.560 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.560 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.561 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.561 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.561 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.561 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.561 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.561 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.561 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.562 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.562 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.562 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.562 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.562 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.562 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.562 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.563 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.563 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.563 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.563 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.563 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.563 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.563 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.564 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.564 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.564 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.564 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.564 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.564 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.564 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.565 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.565 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.565 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.565 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.565 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.565 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.565 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.566 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.566 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.566 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.566 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.566 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.566 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.566 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.567 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.567 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.567 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.567 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.567 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.567 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.567 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.568 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.568 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.568 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.568 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.568 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.568 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.569 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.569 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.569 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.569 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.569 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.569 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.569 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.569 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.570 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.570 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.570 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.570 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.570 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.570 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.570 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.571 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.571 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.571 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.571 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.571 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.571 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.571 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.572 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.572 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.572 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.572 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.572 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.572 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.572 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.573 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.573 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.573 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.573 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.573 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.573 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.573 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.573 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.574 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.574 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.574 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.574 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.574 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.574 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.574 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.575 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.575 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.575 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.575 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.575 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.575 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.575 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.576 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.576 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.576 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.576 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.576 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.576 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.576 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.577 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.577 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.577 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.577 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.577 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.577 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.577 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.578 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.578 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.578 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.578 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.578 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.578 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.578 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.579 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.579 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.579 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.579 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.579 244572 WARNING oslo_config.cfg [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 01 20:48:52 compute-0 nova_compute[244568]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 01 20:48:52 compute-0 nova_compute[244568]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 01 20:48:52 compute-0 nova_compute[244568]: and ``live_migration_inbound_addr`` respectively.
Dec 01 20:48:52 compute-0 nova_compute[244568]: ).  Its value may be silently ignored in the future.
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.579 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.580 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.580 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.580 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.580 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.580 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.581 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.581 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.581 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.581 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.581 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.581 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.581 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.582 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.582 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.582 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.582 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.582 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.582 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.rbd_secret_uuid        = dcf60a89-bba0-58b0-a1bf-d4bde723201b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.582 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.583 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.583 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.583 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.583 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.583 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.583 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.583 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.584 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.584 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.584 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.584 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.584 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.584 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.585 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.585 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.585 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.585 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.585 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.585 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.585 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.586 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.586 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.586 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.586 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.586 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.586 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.586 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.587 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.587 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.587 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.587 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.587 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.587 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.588 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.588 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.588 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.588 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.588 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.588 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.588 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.589 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.589 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.589 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.589 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.589 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.589 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.589 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.589 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.590 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.590 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.590 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.590 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.590 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.590 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.590 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.591 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.591 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.591 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.591 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.591 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.591 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.592 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.592 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.592 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.592 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.592 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.592 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.592 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.593 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.593 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.593 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.593 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.593 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.593 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.593 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.594 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.594 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.594 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.594 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.594 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.594 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.594 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.595 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.595 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.595 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.595 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.595 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.595 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.595 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.596 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.596 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.596 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.596 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.596 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.596 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.596 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.597 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.597 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.597 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.597 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.597 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.597 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.598 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.598 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.598 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.598 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.598 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.598 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.598 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.599 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.599 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.599 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.599 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.599 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.599 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.600 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.600 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.600 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.600 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.600 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.601 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.601 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.601 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.601 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.601 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.601 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.602 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.602 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.602 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.602 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.602 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.603 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.603 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.603 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.603 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.603 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.604 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.604 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.604 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.604 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.604 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.604 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.604 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.605 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.605 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.605 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.605 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.605 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.605 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.606 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.606 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.606 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.606 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.606 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.606 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.606 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.607 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.607 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.607 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.607 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.607 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.607 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.608 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.608 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.608 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.608 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.608 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.609 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.609 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.609 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.609 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.609 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.609 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.610 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.610 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.610 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.610 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.610 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.610 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.611 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.611 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.611 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.611 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.611 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.611 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.612 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.612 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.612 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.612 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.612 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.612 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.613 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.613 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.613 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.613 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.613 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.613 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.613 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.614 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.614 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.614 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.614 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.614 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.614 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.614 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.615 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.615 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.615 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.615 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.615 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.615 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.616 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.616 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.616 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.616 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.616 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.617 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.617 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.617 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.617 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.617 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.617 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.617 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.618 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.618 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.618 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.618 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.618 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.618 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.619 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.619 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.619 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.619 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.619 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.619 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.620 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.620 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.620 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.620 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.620 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.620 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.621 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.621 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.621 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.621 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.621 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.621 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.622 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.622 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.622 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.622 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.622 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.622 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.622 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.623 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.623 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.623 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.623 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.623 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.623 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.623 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.624 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.624 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.624 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.624 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.624 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.624 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.625 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.625 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.625 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.625 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.625 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.625 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.626 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.626 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.626 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.626 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.626 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.626 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.627 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.627 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.627 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.627 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.627 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.628 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.628 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.628 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.628 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.628 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.629 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.629 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.629 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.629 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.629 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.629 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.630 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.630 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.630 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.630 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.630 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.630 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.630 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.631 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.631 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.631 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.631 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.631 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.632 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.632 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.632 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.632 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.632 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.633 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.633 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.633 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.633 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.633 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.633 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.633 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.634 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.634 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.634 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.634 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.634 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.634 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.635 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.635 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.635 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.635 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.635 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.635 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.635 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.636 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.636 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.636 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.636 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.636 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.636 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.636 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.636 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.637 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.637 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.637 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.637 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.637 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.637 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.638 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.638 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.638 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.638 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.638 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.638 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.638 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.639 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.639 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.639 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.639 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.639 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.639 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.639 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.640 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.640 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.640 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.640 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.640 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.640 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.641 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.641 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.641 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.641 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.641 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.641 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.642 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.642 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.642 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.642 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.642 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.642 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.643 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.643 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.643 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.643 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.643 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.643 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.644 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.644 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.644 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.644 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.644 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.644 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.645 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.645 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.645 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.645 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.645 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.645 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.645 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.646 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.646 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.646 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.646 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.646 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.646 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.646 244572 DEBUG oslo_service.service [None req-3daf4122-bd5f-46cd-ac6c-bc5415b1808d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.647 244572 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.708 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.709 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.709 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.709 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.726 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f1e3c95a580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.730 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f1e3c95a580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.731 244572 INFO nova.virt.libvirt.driver [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Connection event '1' reason 'None'
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.739 244572 INFO nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Libvirt host capabilities <capabilities>
Dec 01 20:48:52 compute-0 nova_compute[244568]: 
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <host>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <uuid>6d7269d0-ae07-4538-adba-52753671c0ef</uuid>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <cpu>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <arch>x86_64</arch>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model>EPYC-Rome-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <vendor>AMD</vendor>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <microcode version='16777317'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <signature family='23' model='49' stepping='0'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='x2apic'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='tsc-deadline'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='osxsave'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='hypervisor'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='tsc_adjust'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='spec-ctrl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='stibp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='arch-capabilities'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='cmp_legacy'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='topoext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='virt-ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='lbrv'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='tsc-scale'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='vmcb-clean'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='pause-filter'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='pfthreshold'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='svme-addr-chk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='rdctl-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='skip-l1dfl-vmentry'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='mds-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature name='pschange-mc-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <pages unit='KiB' size='4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <pages unit='KiB' size='2048'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <pages unit='KiB' size='1048576'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </cpu>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <power_management>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <suspend_mem/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </power_management>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <iommu support='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <migration_features>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <live/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <uri_transports>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <uri_transport>tcp</uri_transport>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <uri_transport>rdma</uri_transport>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </uri_transports>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </migration_features>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <topology>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <cells num='1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <cell id='0'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:           <memory unit='KiB'>7864312</memory>
Dec 01 20:48:52 compute-0 nova_compute[244568]:           <pages unit='KiB' size='4'>1966078</pages>
Dec 01 20:48:52 compute-0 nova_compute[244568]:           <pages unit='KiB' size='2048'>0</pages>
Dec 01 20:48:52 compute-0 nova_compute[244568]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 01 20:48:52 compute-0 nova_compute[244568]:           <distances>
Dec 01 20:48:52 compute-0 nova_compute[244568]:             <sibling id='0' value='10'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:           </distances>
Dec 01 20:48:52 compute-0 nova_compute[244568]:           <cpus num='8'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:           </cpus>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         </cell>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </cells>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </topology>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <cache>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </cache>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <secmodel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model>selinux</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <doi>0</doi>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </secmodel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <secmodel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model>dac</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <doi>0</doi>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </secmodel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </host>
Dec 01 20:48:52 compute-0 nova_compute[244568]: 
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <guest>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <os_type>hvm</os_type>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <arch name='i686'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <wordsize>32</wordsize>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <domain type='qemu'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <domain type='kvm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </arch>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <features>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <pae/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <nonpae/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <acpi default='on' toggle='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <apic default='on' toggle='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <cpuselection/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <deviceboot/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <disksnapshot default='on' toggle='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <externalSnapshot/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </features>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </guest>
Dec 01 20:48:52 compute-0 nova_compute[244568]: 
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <guest>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <os_type>hvm</os_type>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <arch name='x86_64'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <wordsize>64</wordsize>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <domain type='qemu'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <domain type='kvm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </arch>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <features>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <acpi default='on' toggle='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <apic default='on' toggle='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <cpuselection/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <deviceboot/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <disksnapshot default='on' toggle='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <externalSnapshot/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </features>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </guest>
Dec 01 20:48:52 compute-0 nova_compute[244568]: 
Dec 01 20:48:52 compute-0 nova_compute[244568]: </capabilities>
Dec 01 20:48:52 compute-0 nova_compute[244568]: 
Dec 01 20:48:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.749 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.759 244572 WARNING nova.virt.libvirt.driver [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.759 244572 DEBUG nova.virt.libvirt.volume.mount [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.777 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 01 20:48:52 compute-0 nova_compute[244568]: <domainCapabilities>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <path>/usr/libexec/qemu-kvm</path>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <domain>kvm</domain>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <arch>i686</arch>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <vcpu max='240'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <iothreads supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <os supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <enum name='firmware'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <loader supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>rom</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pflash</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='readonly'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>yes</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>no</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='secure'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>no</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </loader>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </os>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <cpu>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='host-passthrough' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='hostPassthroughMigratable'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>on</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>off</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='maximum' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='maximumMigratable'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>on</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>off</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='host-model' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <vendor>AMD</vendor>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='x2apic'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc-deadline'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='hypervisor'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc_adjust'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='spec-ctrl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='stibp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='cmp_legacy'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='overflow-recov'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='succor'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='amd-ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='virt-ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='lbrv'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc-scale'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='vmcb-clean'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='flushbyasid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='pause-filter'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='pfthreshold'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='svme-addr-chk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='disable' name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='custom' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Dhyana-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Genoa'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='auto-ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Genoa-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='auto-ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-128'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-256'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-512'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v6'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v7'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='KnightsMill'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4fmaps'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4vnniw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512er'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512pf'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='KnightsMill-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4fmaps'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4vnniw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512er'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512pf'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G4-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tbm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G5-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tbm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SierraForest'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ne-convert'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cmpccxadd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SierraForest-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ne-convert'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cmpccxadd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='athlon'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='athlon-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='core2duo'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='core2duo-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='coreduo'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='coreduo-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='n270'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='n270-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='phenom'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='phenom-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </cpu>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <memoryBacking supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <enum name='sourceType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>file</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>anonymous</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>memfd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </memoryBacking>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <devices>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <disk supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='diskDevice'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>disk</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>cdrom</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>floppy</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>lun</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='bus'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>ide</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>fdc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>scsi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>sata</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-non-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </disk>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <graphics supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vnc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>egl-headless</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dbus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </graphics>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <video supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='modelType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vga</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>cirrus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>none</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>bochs</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>ramfb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </video>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <hostdev supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='mode'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>subsystem</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='startupPolicy'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>default</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>mandatory</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>requisite</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>optional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='subsysType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pci</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>scsi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='capsType'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='pciBackend'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </hostdev>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <rng supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-non-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>random</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>egd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>builtin</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </rng>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <filesystem supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='driverType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>path</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>handle</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtiofs</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </filesystem>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <tpm supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tpm-tis</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tpm-crb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>emulator</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>external</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendVersion'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>2.0</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </tpm>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <redirdev supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='bus'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </redirdev>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <channel supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pty</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>unix</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </channel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <crypto supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>qemu</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>builtin</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </crypto>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <interface supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>default</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>passt</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </interface>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <panic supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>isa</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>hyperv</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </panic>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <console supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>null</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pty</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dev</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>file</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pipe</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>stdio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>udp</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tcp</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>unix</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>qemu-vdagent</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dbus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </console>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </devices>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <features>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <gic supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <vmcoreinfo supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <genid supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <backingStoreInput supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <backup supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <async-teardown supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <ps2 supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <sev supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <sgx supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <hyperv supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='features'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>relaxed</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vapic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>spinlocks</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vpindex</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>runtime</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>synic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>stimer</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>reset</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vendor_id</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>frequencies</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>reenlightenment</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tlbflush</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>ipi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>avic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>emsr_bitmap</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>xmm_input</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <defaults>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <spinlocks>4095</spinlocks>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <stimer_direct>on</stimer_direct>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <tlbflush_direct>on</tlbflush_direct>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <tlbflush_extended>on</tlbflush_extended>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </defaults>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </hyperv>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <launchSecurity supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='sectype'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tdx</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </launchSecurity>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </features>
Dec 01 20:48:52 compute-0 nova_compute[244568]: </domainCapabilities>
Dec 01 20:48:52 compute-0 nova_compute[244568]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.785 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 01 20:48:52 compute-0 nova_compute[244568]: <domainCapabilities>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <path>/usr/libexec/qemu-kvm</path>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <domain>kvm</domain>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <arch>i686</arch>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <vcpu max='4096'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <iothreads supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <os supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <enum name='firmware'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <loader supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>rom</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pflash</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='readonly'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>yes</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>no</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='secure'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>no</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </loader>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </os>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <cpu>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='host-passthrough' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='hostPassthroughMigratable'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>on</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>off</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='maximum' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='maximumMigratable'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>on</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>off</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='host-model' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <vendor>AMD</vendor>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='x2apic'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc-deadline'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='hypervisor'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc_adjust'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='spec-ctrl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='stibp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='cmp_legacy'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='overflow-recov'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='succor'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='amd-ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='virt-ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='lbrv'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc-scale'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='vmcb-clean'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='flushbyasid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='pause-filter'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='pfthreshold'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='svme-addr-chk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='disable' name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='custom' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Dhyana-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Genoa'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='auto-ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Genoa-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='auto-ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-128'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-256'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-512'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v6'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v7'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='KnightsMill'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4fmaps'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4vnniw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512er'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512pf'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='KnightsMill-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4fmaps'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4vnniw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512er'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512pf'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G4-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tbm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G5-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tbm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SierraForest'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ne-convert'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cmpccxadd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SierraForest-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ne-convert'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cmpccxadd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='athlon'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='athlon-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='core2duo'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='core2duo-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='coreduo'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='coreduo-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='n270'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='n270-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='phenom'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='phenom-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </cpu>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <memoryBacking supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <enum name='sourceType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>file</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>anonymous</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>memfd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </memoryBacking>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <devices>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <disk supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='diskDevice'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>disk</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>cdrom</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>floppy</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>lun</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='bus'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>fdc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>scsi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>sata</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-non-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </disk>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <graphics supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vnc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>egl-headless</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dbus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </graphics>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <video supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='modelType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vga</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>cirrus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>none</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>bochs</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>ramfb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </video>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <hostdev supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='mode'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>subsystem</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='startupPolicy'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>default</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>mandatory</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>requisite</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>optional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='subsysType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pci</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>scsi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='capsType'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='pciBackend'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </hostdev>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <rng supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-non-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>random</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>egd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>builtin</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </rng>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <filesystem supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='driverType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>path</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>handle</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtiofs</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </filesystem>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <tpm supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tpm-tis</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tpm-crb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>emulator</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>external</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendVersion'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>2.0</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </tpm>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <redirdev supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='bus'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </redirdev>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <channel supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pty</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>unix</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </channel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <crypto supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>qemu</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>builtin</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </crypto>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <interface supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>default</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>passt</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </interface>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <panic supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>isa</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>hyperv</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </panic>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <console supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>null</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pty</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dev</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>file</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pipe</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>stdio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>udp</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tcp</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>unix</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>qemu-vdagent</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dbus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </console>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </devices>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <features>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <gic supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <vmcoreinfo supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <genid supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <backingStoreInput supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <backup supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <async-teardown supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <ps2 supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <sev supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <sgx supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <hyperv supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='features'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>relaxed</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vapic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>spinlocks</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vpindex</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>runtime</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>synic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>stimer</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>reset</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vendor_id</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>frequencies</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>reenlightenment</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tlbflush</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>ipi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>avic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>emsr_bitmap</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>xmm_input</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <defaults>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <spinlocks>4095</spinlocks>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <stimer_direct>on</stimer_direct>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <tlbflush_direct>on</tlbflush_direct>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <tlbflush_extended>on</tlbflush_extended>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </defaults>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </hyperv>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <launchSecurity supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='sectype'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tdx</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </launchSecurity>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </features>
Dec 01 20:48:52 compute-0 nova_compute[244568]: </domainCapabilities>
Dec 01 20:48:52 compute-0 nova_compute[244568]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.817 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.822 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 01 20:48:52 compute-0 nova_compute[244568]: <domainCapabilities>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <path>/usr/libexec/qemu-kvm</path>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <domain>kvm</domain>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <arch>x86_64</arch>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <vcpu max='4096'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <iothreads supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <os supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <enum name='firmware'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>efi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <loader supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>rom</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pflash</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='readonly'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>yes</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>no</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='secure'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>yes</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>no</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </loader>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </os>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <cpu>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='host-passthrough' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='hostPassthroughMigratable'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>on</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>off</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='maximum' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='maximumMigratable'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>on</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>off</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='host-model' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <vendor>AMD</vendor>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='x2apic'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc-deadline'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='hypervisor'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc_adjust'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='spec-ctrl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='stibp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='cmp_legacy'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='overflow-recov'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='succor'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='amd-ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='virt-ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='lbrv'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc-scale'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='vmcb-clean'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='flushbyasid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='pause-filter'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='pfthreshold'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='svme-addr-chk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='disable' name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='custom' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Dhyana-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Genoa'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='auto-ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Genoa-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='auto-ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-128'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-256'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-512'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v6'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v7'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='KnightsMill'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4fmaps'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4vnniw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512er'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512pf'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='KnightsMill-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4fmaps'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4vnniw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512er'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512pf'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G4-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tbm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G5-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tbm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SierraForest'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ne-convert'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cmpccxadd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SierraForest-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ne-convert'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cmpccxadd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='athlon'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='athlon-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='core2duo'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='core2duo-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='coreduo'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='coreduo-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='n270'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='n270-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='phenom'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='phenom-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </cpu>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <memoryBacking supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <enum name='sourceType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>file</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>anonymous</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>memfd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </memoryBacking>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <devices>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <disk supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='diskDevice'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>disk</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>cdrom</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>floppy</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>lun</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='bus'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>fdc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>scsi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>sata</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-non-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </disk>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <graphics supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vnc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>egl-headless</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dbus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </graphics>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <video supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='modelType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vga</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>cirrus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>none</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>bochs</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>ramfb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </video>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <hostdev supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='mode'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>subsystem</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='startupPolicy'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>default</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>mandatory</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>requisite</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>optional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='subsysType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pci</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>scsi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='capsType'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='pciBackend'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </hostdev>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <rng supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-non-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>random</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>egd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>builtin</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </rng>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <filesystem supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='driverType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>path</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>handle</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtiofs</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </filesystem>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <tpm supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tpm-tis</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tpm-crb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>emulator</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>external</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendVersion'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>2.0</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </tpm>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <redirdev supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='bus'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </redirdev>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <channel supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pty</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>unix</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </channel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <crypto supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>qemu</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>builtin</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </crypto>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <interface supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>default</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>passt</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </interface>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <panic supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>isa</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>hyperv</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </panic>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <console supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>null</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pty</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dev</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>file</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pipe</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>stdio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>udp</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tcp</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>unix</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>qemu-vdagent</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dbus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </console>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </devices>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <features>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <gic supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <vmcoreinfo supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <genid supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <backingStoreInput supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <backup supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <async-teardown supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <ps2 supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <sev supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <sgx supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <hyperv supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='features'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>relaxed</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vapic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>spinlocks</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vpindex</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>runtime</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>synic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>stimer</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>reset</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vendor_id</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>frequencies</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>reenlightenment</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tlbflush</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>ipi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>avic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>emsr_bitmap</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>xmm_input</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <defaults>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <spinlocks>4095</spinlocks>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <stimer_direct>on</stimer_direct>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <tlbflush_direct>on</tlbflush_direct>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <tlbflush_extended>on</tlbflush_extended>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </defaults>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </hyperv>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <launchSecurity supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='sectype'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tdx</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </launchSecurity>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </features>
Dec 01 20:48:52 compute-0 nova_compute[244568]: </domainCapabilities>
Dec 01 20:48:52 compute-0 nova_compute[244568]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.879 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 01 20:48:52 compute-0 nova_compute[244568]: <domainCapabilities>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <path>/usr/libexec/qemu-kvm</path>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <domain>kvm</domain>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <arch>x86_64</arch>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <vcpu max='240'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <iothreads supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <os supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <enum name='firmware'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <loader supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>rom</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pflash</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='readonly'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>yes</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>no</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='secure'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>no</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </loader>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </os>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <cpu>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='host-passthrough' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='hostPassthroughMigratable'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>on</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>off</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='maximum' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='maximumMigratable'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>on</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>off</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='host-model' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <vendor>AMD</vendor>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='x2apic'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc-deadline'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='hypervisor'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc_adjust'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='spec-ctrl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='stibp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='cmp_legacy'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='overflow-recov'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='succor'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='amd-ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='virt-ssbd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='lbrv'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='tsc-scale'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='vmcb-clean'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='flushbyasid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='pause-filter'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='pfthreshold'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='svme-addr-chk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <feature policy='disable' name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <mode name='custom' supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Broadwell-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cascadelake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Cooperlake-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Denverton-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Dhyana-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Genoa'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='auto-ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Genoa-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='auto-ibrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Milan-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amd-psfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='no-nested-data-bp'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='null-sel-clr-base'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='stibp-always-on'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-Rome-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='EPYC-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='GraniteRapids-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-128'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-256'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx10-512'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='prefetchiti'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Haswell-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-noTSX'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v6'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Icelake-Server-v7'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='IvyBridge-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='KnightsMill'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4fmaps'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4vnniw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512er'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512pf'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='KnightsMill-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4fmaps'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-4vnniw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512er'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512pf'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G4-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tbm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Opteron_G5-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fma4'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tbm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xop'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SapphireRapids-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='amx-tile'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-bf16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-fp16'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512-vpopcntdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bitalg'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vbmi2'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrc'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fzrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='la57'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='taa-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='tsx-ldtrk'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xfd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SierraForest'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ne-convert'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cmpccxadd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='SierraForest-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ifma'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-ne-convert'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx-vnni-int8'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='bus-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cmpccxadd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fbsdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='fsrs'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ibrs-all'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mcdt-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pbrsb-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='psdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='sbdr-ssdp-no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='serialize'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vaes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='vpclmulqdq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Client-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='hle'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='rtm'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Skylake-Server-v5'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512bw'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512cd'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512dq'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512f'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='avx512vl'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='invpcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pcid'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='pku'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='mpx'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v2'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v3'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='core-capability'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='split-lock-detect'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='Snowridge-v4'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='cldemote'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='erms'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='gfni'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdir64b'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='movdiri'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='xsaves'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='athlon'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='athlon-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='core2duo'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='core2duo-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='coreduo'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='coreduo-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='n270'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='n270-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='ss'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='phenom'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <blockers model='phenom-v1'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnow'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <feature name='3dnowext'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </blockers>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </mode>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </cpu>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <memoryBacking supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <enum name='sourceType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>file</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>anonymous</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <value>memfd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </memoryBacking>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <devices>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <disk supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='diskDevice'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>disk</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>cdrom</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>floppy</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>lun</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='bus'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>ide</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>fdc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>scsi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>sata</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-non-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </disk>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <graphics supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vnc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>egl-headless</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dbus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </graphics>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <video supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='modelType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vga</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>cirrus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>none</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>bochs</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>ramfb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </video>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <hostdev supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='mode'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>subsystem</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='startupPolicy'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>default</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>mandatory</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>requisite</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>optional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='subsysType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pci</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>scsi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='capsType'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='pciBackend'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </hostdev>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <rng supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtio-non-transitional</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>random</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>egd</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>builtin</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </rng>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <filesystem supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='driverType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>path</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>handle</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>virtiofs</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </filesystem>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <tpm supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tpm-tis</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tpm-crb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>emulator</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>external</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendVersion'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>2.0</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </tpm>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <redirdev supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='bus'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>usb</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </redirdev>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <channel supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pty</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>unix</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </channel>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <crypto supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>qemu</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendModel'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>builtin</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </crypto>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <interface supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='backendType'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>default</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>passt</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </interface>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <panic supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='model'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>isa</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>hyperv</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </panic>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <console supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='type'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>null</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vc</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pty</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dev</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>file</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>pipe</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>stdio</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>udp</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tcp</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>unix</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>qemu-vdagent</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>dbus</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </console>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </devices>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   <features>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <gic supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <vmcoreinfo supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <genid supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <backingStoreInput supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <backup supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <async-teardown supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <ps2 supported='yes'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <sev supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <sgx supported='no'/>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <hyperv supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='features'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>relaxed</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vapic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>spinlocks</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vpindex</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>runtime</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>synic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>stimer</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>reset</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>vendor_id</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>frequencies</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>reenlightenment</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tlbflush</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>ipi</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>avic</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>emsr_bitmap</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>xmm_input</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <defaults>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <spinlocks>4095</spinlocks>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <stimer_direct>on</stimer_direct>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <tlbflush_direct>on</tlbflush_direct>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <tlbflush_extended>on</tlbflush_extended>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </defaults>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </hyperv>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     <launchSecurity supported='yes'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       <enum name='sectype'>
Dec 01 20:48:52 compute-0 nova_compute[244568]:         <value>tdx</value>
Dec 01 20:48:52 compute-0 nova_compute[244568]:       </enum>
Dec 01 20:48:52 compute-0 nova_compute[244568]:     </launchSecurity>
Dec 01 20:48:52 compute-0 nova_compute[244568]:   </features>
Dec 01 20:48:52 compute-0 nova_compute[244568]: </domainCapabilities>
Dec 01 20:48:52 compute-0 nova_compute[244568]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.933 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.934 244572 INFO nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Secure Boot support detected
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.935 244572 INFO nova.virt.libvirt.driver [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.936 244572 INFO nova.virt.libvirt.driver [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.946 244572 DEBUG nova.virt.libvirt.driver [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 01 20:48:52 compute-0 nova_compute[244568]: 2025-12-01 20:48:52.982 244572 INFO nova.virt.node [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Determined node identity 1adb778b-ac5d-48bb-abc3-c422b12ca516 from /var/lib/nova/compute_id
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.007 244572 WARNING nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Compute nodes ['1adb778b-ac5d-48bb-abc3-c422b12ca516'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.053 244572 INFO nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 01 20:48:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.105 244572 WARNING nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.105 244572 DEBUG oslo_concurrency.lockutils [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.106 244572 DEBUG oslo_concurrency.lockutils [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.106 244572 DEBUG oslo_concurrency.lockutils [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.106 244572 DEBUG nova.compute.resource_tracker [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.106 244572 DEBUG oslo_concurrency.processutils [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:48:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:48:53 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309003651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.638 244572 DEBUG oslo_concurrency.processutils [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:48:53 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 01 20:48:53 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 01 20:48:53 compute-0 ceph-mon[75880]: pgmap v575: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:53 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2309003651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.971 244572 WARNING nova.virt.libvirt.driver [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.972 244572 DEBUG nova.compute.resource_tracker [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5270MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.972 244572 DEBUG oslo_concurrency.lockutils [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:48:53 compute-0 nova_compute[244568]: 2025-12-01 20:48:53.972 244572 DEBUG oslo_concurrency.lockutils [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:48:54 compute-0 nova_compute[244568]: 2025-12-01 20:48:54.087 244572 WARNING nova.compute.resource_tracker [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] No compute node record for compute-0.ctlplane.example.com:1adb778b-ac5d-48bb-abc3-c422b12ca516: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 1adb778b-ac5d-48bb-abc3-c422b12ca516 could not be found.
Dec 01 20:48:54 compute-0 nova_compute[244568]: 2025-12-01 20:48:54.175 244572 INFO nova.compute.resource_tracker [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 1adb778b-ac5d-48bb-abc3-c422b12ca516
Dec 01 20:48:54 compute-0 nova_compute[244568]: 2025-12-01 20:48:54.455 244572 DEBUG nova.compute.resource_tracker [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 20:48:54 compute-0 nova_compute[244568]: 2025-12-01 20:48:54.456 244572 DEBUG nova.compute.resource_tracker [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 20:48:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:55 compute-0 nova_compute[244568]: 2025-12-01 20:48:55.491 244572 INFO nova.scheduler.client.report [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [req-74d55f60-f60f-4402-86dc-04fb91d31a7b] Created resource provider record via placement API for resource provider with UUID 1adb778b-ac5d-48bb-abc3-c422b12ca516 and name compute-0.ctlplane.example.com.
Dec 01 20:48:55 compute-0 ceph-mon[75880]: pgmap v576: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:55 compute-0 nova_compute[244568]: 2025-12-01 20:48:55.901 244572 DEBUG oslo_concurrency.processutils [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:48:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:48:56 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385164995' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.391 244572 DEBUG oslo_concurrency.processutils [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.395 244572 DEBUG nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 01 20:48:56 compute-0 nova_compute[244568]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.396 244572 INFO nova.virt.libvirt.host [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] kernel doesn't support AMD SEV
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.396 244572 DEBUG nova.compute.provider_tree [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Updating inventory in ProviderTree for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.397 244572 DEBUG nova.virt.libvirt.driver [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.741 244572 DEBUG nova.scheduler.client.report [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Updated inventory for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.742 244572 DEBUG nova.compute.provider_tree [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Updating resource provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.742 244572 DEBUG nova.compute.provider_tree [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Updating inventory in ProviderTree for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 01 20:48:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:56 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/385164995' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.852 244572 DEBUG nova.compute.provider_tree [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Updating resource provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.877 244572 DEBUG nova.compute.resource_tracker [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.877 244572 DEBUG oslo_concurrency.lockutils [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.877 244572 DEBUG nova.service [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.954 244572 DEBUG nova.service [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 01 20:48:56 compute-0 nova_compute[244568]: 2025-12-01 20:48:56.955 244572 DEBUG nova.servicegroup.drivers.db [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 01 20:48:57 compute-0 ceph-mon[75880]: pgmap v577: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:48:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:48:59 compute-0 ceph-mon[75880]: pgmap v578: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v579: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:01 compute-0 ceph-mon[75880]: pgmap v579: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:49:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:49:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:49:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:49:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:49:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:49:03 compute-0 ceph-mon[75880]: pgmap v580: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:04 compute-0 nova_compute[244568]: 2025-12-01 20:49:04.956 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:49:04 compute-0 nova_compute[244568]: 2025-12-01 20:49:04.986 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:49:05 compute-0 ceph-mon[75880]: pgmap v581: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:07 compute-0 ceph-mon[75880]: pgmap v582: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:09 compute-0 ceph-mon[75880]: pgmap v583: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:11 compute-0 ceph-mon[75880]: pgmap v584: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:13 compute-0 ceph-mon[75880]: pgmap v585: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:15 compute-0 podman[244936]: 2025-12-01 20:49:15.141156807 +0000 UTC m=+0.100031768 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 01 20:49:15 compute-0 ceph-mon[75880]: pgmap v586: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:49:16 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/871315020' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:49:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:49:16 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/871315020' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:49:16 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/871315020' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:49:16 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/871315020' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:49:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:49:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4083075246' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:49:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:49:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4083075246' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:49:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:49:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/679379263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:49:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:49:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/679379263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:49:17 compute-0 ceph-mon[75880]: pgmap v587: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:17 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/4083075246' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:49:17 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/4083075246' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:49:17 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/679379263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:49:17 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/679379263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:49:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:18 compute-0 podman[244956]: 2025-12-01 20:49:18.084387694 +0000 UTC m=+0.053320839 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 01 20:49:18 compute-0 podman[244957]: 2025-12-01 20:49:18.109569434 +0000 UTC m=+0.074640071 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Dec 01 20:49:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:19 compute-0 ceph-mon[75880]: pgmap v588: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:22 compute-0 ceph-mon[75880]: pgmap v589: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:24 compute-0 ceph-mon[75880]: pgmap v590: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:26 compute-0 ceph-mon[75880]: pgmap v591: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:28 compute-0 ceph-mon[75880]: pgmap v592: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:30 compute-0 ceph-mon[75880]: pgmap v593: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:31 compute-0 ceph-mon[75880]: pgmap v594: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:49:32
Dec 01 20:49:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:49:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:49:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'images']
Dec 01 20:49:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:49:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:49:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:49:33 compute-0 ceph-mon[75880]: pgmap v595: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:35 compute-0 ceph-mon[75880]: pgmap v596: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:37 compute-0 ceph-mon[75880]: pgmap v597: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:39 compute-0 ceph-mon[75880]: pgmap v598: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:49:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:41 compute-0 ceph-mon[75880]: pgmap v599: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:43 compute-0 ceph-mon[75880]: pgmap v600: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:49:44.350 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:49:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:49:44.350 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:49:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:49:44.350 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:49:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v601: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:45 compute-0 ceph-mon[75880]: pgmap v601: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:46 compute-0 podman[245001]: 2025-12-01 20:49:46.088090302 +0000 UTC m=+0.051125857 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:49:46 compute-0 sudo[245020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:49:46 compute-0 sudo[245020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:49:46 compute-0 sudo[245020]: pam_unix(sudo:session): session closed for user root
Dec 01 20:49:46 compute-0 sudo[245045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:49:46 compute-0 sudo[245045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:49:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:46 compute-0 sudo[245045]: pam_unix(sudo:session): session closed for user root
Dec 01 20:49:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:49:46 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:49:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:49:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:49:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:49:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:49:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:49:46 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:49:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:49:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:49:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:49:46 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:49:46 compute-0 sudo[245101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:49:46 compute-0 sudo[245101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:49:46 compute-0 sudo[245101]: pam_unix(sudo:session): session closed for user root
Dec 01 20:49:47 compute-0 sudo[245126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:49:47 compute-0 sudo[245126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:49:47 compute-0 podman[245164]: 2025-12-01 20:49:47.285994082 +0000 UTC m=+0.037880738 container create 0e1de46b9c1e4afbdd5884759197a648f08445f80f872b14d16863cfce3c1b98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noyce, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 01 20:49:47 compute-0 systemd[1]: Started libpod-conmon-0e1de46b9c1e4afbdd5884759197a648f08445f80f872b14d16863cfce3c1b98.scope.
Dec 01 20:49:47 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:49:47 compute-0 podman[245164]: 2025-12-01 20:49:47.357872283 +0000 UTC m=+0.109758959 container init 0e1de46b9c1e4afbdd5884759197a648f08445f80f872b14d16863cfce3c1b98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noyce, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 01 20:49:47 compute-0 podman[245164]: 2025-12-01 20:49:47.268301882 +0000 UTC m=+0.020188568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:49:47 compute-0 podman[245164]: 2025-12-01 20:49:47.365014924 +0000 UTC m=+0.116901580 container start 0e1de46b9c1e4afbdd5884759197a648f08445f80f872b14d16863cfce3c1b98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:49:47 compute-0 podman[245164]: 2025-12-01 20:49:47.368374249 +0000 UTC m=+0.120260905 container attach 0e1de46b9c1e4afbdd5884759197a648f08445f80f872b14d16863cfce3c1b98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 01 20:49:47 compute-0 sad_noyce[245180]: 167 167
Dec 01 20:49:47 compute-0 systemd[1]: libpod-0e1de46b9c1e4afbdd5884759197a648f08445f80f872b14d16863cfce3c1b98.scope: Deactivated successfully.
Dec 01 20:49:47 compute-0 podman[245164]: 2025-12-01 20:49:47.370503095 +0000 UTC m=+0.122389771 container died 0e1de46b9c1e4afbdd5884759197a648f08445f80f872b14d16863cfce3c1b98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noyce, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:49:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-93c983d4bc44107e872007bcaee50e5642375ee0a79c2c8ad5cd6e6cda78befb-merged.mount: Deactivated successfully.
Dec 01 20:49:47 compute-0 podman[245164]: 2025-12-01 20:49:47.412251281 +0000 UTC m=+0.164137957 container remove 0e1de46b9c1e4afbdd5884759197a648f08445f80f872b14d16863cfce3c1b98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_noyce, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:49:47 compute-0 systemd[1]: libpod-conmon-0e1de46b9c1e4afbdd5884759197a648f08445f80f872b14d16863cfce3c1b98.scope: Deactivated successfully.
Dec 01 20:49:47 compute-0 podman[245204]: 2025-12-01 20:49:47.592979502 +0000 UTC m=+0.049791177 container create 53e36a8f787a745c5576700a0a1cae97e98a6c3bd914c2c4cdc93894e46892b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_diffie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 01 20:49:47 compute-0 systemd[1]: Started libpod-conmon-53e36a8f787a745c5576700a0a1cae97e98a6c3bd914c2c4cdc93894e46892b6.scope.
Dec 01 20:49:47 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbbaf47daff09edda68c92e657d23479909786cff37f57e4543f6e4e7fe9e32a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbbaf47daff09edda68c92e657d23479909786cff37f57e4543f6e4e7fe9e32a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbbaf47daff09edda68c92e657d23479909786cff37f57e4543f6e4e7fe9e32a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbbaf47daff09edda68c92e657d23479909786cff37f57e4543f6e4e7fe9e32a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbbaf47daff09edda68c92e657d23479909786cff37f57e4543f6e4e7fe9e32a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:47 compute-0 podman[245204]: 2025-12-01 20:49:47.664217573 +0000 UTC m=+0.121029248 container init 53e36a8f787a745c5576700a0a1cae97e98a6c3bd914c2c4cdc93894e46892b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:49:47 compute-0 podman[245204]: 2025-12-01 20:49:47.574104546 +0000 UTC m=+0.030916241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:49:47 compute-0 podman[245204]: 2025-12-01 20:49:47.673799581 +0000 UTC m=+0.130611246 container start 53e36a8f787a745c5576700a0a1cae97e98a6c3bd914c2c4cdc93894e46892b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_diffie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 01 20:49:47 compute-0 podman[245204]: 2025-12-01 20:49:47.676861926 +0000 UTC m=+0.133673621 container attach 53e36a8f787a745c5576700a0a1cae97e98a6c3bd914c2c4cdc93894e46892b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:49:47 compute-0 ceph-mon[75880]: pgmap v602: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:47 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:49:47 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:49:47 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:49:47 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:49:47 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:49:47 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:49:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:48 compute-0 distracted_diffie[245221]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:49:48 compute-0 distracted_diffie[245221]: --> All data devices are unavailable
Dec 01 20:49:48 compute-0 systemd[1]: libpod-53e36a8f787a745c5576700a0a1cae97e98a6c3bd914c2c4cdc93894e46892b6.scope: Deactivated successfully.
Dec 01 20:49:48 compute-0 podman[245204]: 2025-12-01 20:49:48.123394758 +0000 UTC m=+0.580206413 container died 53e36a8f787a745c5576700a0a1cae97e98a6c3bd914c2c4cdc93894e46892b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_diffie, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:49:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbbaf47daff09edda68c92e657d23479909786cff37f57e4543f6e4e7fe9e32a-merged.mount: Deactivated successfully.
Dec 01 20:49:48 compute-0 podman[245204]: 2025-12-01 20:49:48.170085678 +0000 UTC m=+0.626897343 container remove 53e36a8f787a745c5576700a0a1cae97e98a6c3bd914c2c4cdc93894e46892b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:49:48 compute-0 systemd[1]: libpod-conmon-53e36a8f787a745c5576700a0a1cae97e98a6c3bd914c2c4cdc93894e46892b6.scope: Deactivated successfully.
Dec 01 20:49:48 compute-0 podman[245241]: 2025-12-01 20:49:48.214017522 +0000 UTC m=+0.058753575 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 01 20:49:48 compute-0 sudo[245126]: pam_unix(sudo:session): session closed for user root
Dec 01 20:49:48 compute-0 podman[245249]: 2025-12-01 20:49:48.241009619 +0000 UTC m=+0.086105134 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 01 20:49:48 compute-0 sudo[245294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:49:48 compute-0 sudo[245294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:49:48 compute-0 sudo[245294]: pam_unix(sudo:session): session closed for user root
Dec 01 20:49:48 compute-0 sudo[245321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:49:48 compute-0 sudo[245321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:49:48 compute-0 podman[245356]: 2025-12-01 20:49:48.578846618 +0000 UTC m=+0.039619531 container create d47cabe2825d1cb51a4e217d0075e6dc3aed025aa8d91f5411e80d76a8930576 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Dec 01 20:49:48 compute-0 systemd[1]: Started libpod-conmon-d47cabe2825d1cb51a4e217d0075e6dc3aed025aa8d91f5411e80d76a8930576.scope.
Dec 01 20:49:48 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:49:48 compute-0 podman[245356]: 2025-12-01 20:49:48.650050428 +0000 UTC m=+0.110823431 container init d47cabe2825d1cb51a4e217d0075e6dc3aed025aa8d91f5411e80d76a8930576 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:49:48 compute-0 podman[245356]: 2025-12-01 20:49:48.657470059 +0000 UTC m=+0.118243002 container start d47cabe2825d1cb51a4e217d0075e6dc3aed025aa8d91f5411e80d76a8930576 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Dec 01 20:49:48 compute-0 podman[245356]: 2025-12-01 20:49:48.563616945 +0000 UTC m=+0.024389868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:49:48 compute-0 podman[245356]: 2025-12-01 20:49:48.661475453 +0000 UTC m=+0.122248406 container attach d47cabe2825d1cb51a4e217d0075e6dc3aed025aa8d91f5411e80d76a8930576 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:49:48 compute-0 kind_ptolemy[245373]: 167 167
Dec 01 20:49:48 compute-0 systemd[1]: libpod-d47cabe2825d1cb51a4e217d0075e6dc3aed025aa8d91f5411e80d76a8930576.scope: Deactivated successfully.
Dec 01 20:49:48 compute-0 podman[245356]: 2025-12-01 20:49:48.664011381 +0000 UTC m=+0.124784304 container died d47cabe2825d1cb51a4e217d0075e6dc3aed025aa8d91f5411e80d76a8930576 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:49:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-19aee5e3742f3ab0924e0dd8cc522fe8061262968367d7d7c4d7cb05ac900c01-merged.mount: Deactivated successfully.
Dec 01 20:49:48 compute-0 podman[245356]: 2025-12-01 20:49:48.702479956 +0000 UTC m=+0.163252849 container remove d47cabe2825d1cb51a4e217d0075e6dc3aed025aa8d91f5411e80d76a8930576 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_ptolemy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:49:48 compute-0 systemd[1]: libpod-conmon-d47cabe2825d1cb51a4e217d0075e6dc3aed025aa8d91f5411e80d76a8930576.scope: Deactivated successfully.
Dec 01 20:49:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:48 compute-0 podman[245397]: 2025-12-01 20:49:48.866695064 +0000 UTC m=+0.038829396 container create 95979d773db31c17ab5d917b389e5c4876446f9b726fb832a0d21a9a2b512b1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:49:48 compute-0 systemd[1]: Started libpod-conmon-95979d773db31c17ab5d917b389e5c4876446f9b726fb832a0d21a9a2b512b1b.scope.
Dec 01 20:49:48 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7f841fa6bfc7971fd3998aff6aba8912e2afbe2dfb6ffdbebb0768d1d705b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7f841fa6bfc7971fd3998aff6aba8912e2afbe2dfb6ffdbebb0768d1d705b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7f841fa6bfc7971fd3998aff6aba8912e2afbe2dfb6ffdbebb0768d1d705b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c7f841fa6bfc7971fd3998aff6aba8912e2afbe2dfb6ffdbebb0768d1d705b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:48 compute-0 podman[245397]: 2025-12-01 20:49:48.936559953 +0000 UTC m=+0.108694315 container init 95979d773db31c17ab5d917b389e5c4876446f9b726fb832a0d21a9a2b512b1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamarr, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:49:48 compute-0 podman[245397]: 2025-12-01 20:49:48.941779935 +0000 UTC m=+0.113914267 container start 95979d773db31c17ab5d917b389e5c4876446f9b726fb832a0d21a9a2b512b1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 01 20:49:48 compute-0 podman[245397]: 2025-12-01 20:49:48.944391746 +0000 UTC m=+0.116526078 container attach 95979d773db31c17ab5d917b389e5c4876446f9b726fb832a0d21a9a2b512b1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamarr, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:49:48 compute-0 podman[245397]: 2025-12-01 20:49:48.851475321 +0000 UTC m=+0.023609673 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]: {
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:     "0": [
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:         {
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "devices": [
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "/dev/loop3"
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             ],
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_name": "ceph_lv0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_size": "21470642176",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "name": "ceph_lv0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "tags": {
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.cluster_name": "ceph",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.crush_device_class": "",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.encrypted": "0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.objectstore": "bluestore",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.osd_id": "0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.type": "block",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.vdo": "0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.with_tpm": "0"
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             },
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "type": "block",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "vg_name": "ceph_vg0"
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:         }
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:     ],
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:     "1": [
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:         {
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "devices": [
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "/dev/loop4"
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             ],
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_name": "ceph_lv1",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_size": "21470642176",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "name": "ceph_lv1",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "tags": {
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.cluster_name": "ceph",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.crush_device_class": "",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.encrypted": "0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.objectstore": "bluestore",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.osd_id": "1",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.type": "block",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.vdo": "0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.with_tpm": "0"
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             },
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "type": "block",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "vg_name": "ceph_vg1"
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:         }
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:     ],
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:     "2": [
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:         {
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "devices": [
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "/dev/loop5"
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             ],
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_name": "ceph_lv2",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_size": "21470642176",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "name": "ceph_lv2",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "tags": {
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.cluster_name": "ceph",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.crush_device_class": "",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.encrypted": "0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.objectstore": "bluestore",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.osd_id": "2",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.type": "block",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.vdo": "0",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:                 "ceph.with_tpm": "0"
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             },
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "type": "block",
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:             "vg_name": "ceph_vg2"
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:         }
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]:     ]
Dec 01 20:49:49 compute-0 dazzling_lamarr[245414]: }
Dec 01 20:49:49 compute-0 systemd[1]: libpod-95979d773db31c17ab5d917b389e5c4876446f9b726fb832a0d21a9a2b512b1b.scope: Deactivated successfully.
Dec 01 20:49:49 compute-0 podman[245397]: 2025-12-01 20:49:49.231772457 +0000 UTC m=+0.403906789 container died 95979d773db31c17ab5d917b389e5c4876446f9b726fb832a0d21a9a2b512b1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamarr, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:49:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c7f841fa6bfc7971fd3998aff6aba8912e2afbe2dfb6ffdbebb0768d1d705b2-merged.mount: Deactivated successfully.
Dec 01 20:49:49 compute-0 podman[245397]: 2025-12-01 20:49:49.268519228 +0000 UTC m=+0.440653560 container remove 95979d773db31c17ab5d917b389e5c4876446f9b726fb832a0d21a9a2b512b1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:49:49 compute-0 systemd[1]: libpod-conmon-95979d773db31c17ab5d917b389e5c4876446f9b726fb832a0d21a9a2b512b1b.scope: Deactivated successfully.
Dec 01 20:49:49 compute-0 sudo[245321]: pam_unix(sudo:session): session closed for user root
Dec 01 20:49:49 compute-0 sudo[245435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:49:49 compute-0 sudo[245435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:49:49 compute-0 sudo[245435]: pam_unix(sudo:session): session closed for user root
Dec 01 20:49:49 compute-0 sudo[245460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:49:49 compute-0 sudo[245460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:49:49 compute-0 podman[245497]: 2025-12-01 20:49:49.6866662 +0000 UTC m=+0.043334347 container create 444a67ec080370731dcaca617a8418896ed4785c2af88120fb20016f883ce9d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_solomon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:49:49 compute-0 systemd[1]: Started libpod-conmon-444a67ec080370731dcaca617a8418896ed4785c2af88120fb20016f883ce9d4.scope.
Dec 01 20:49:49 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:49:49 compute-0 podman[245497]: 2025-12-01 20:49:49.759247163 +0000 UTC m=+0.115915330 container init 444a67ec080370731dcaca617a8418896ed4785c2af88120fb20016f883ce9d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_solomon, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:49:49 compute-0 podman[245497]: 2025-12-01 20:49:49.666778692 +0000 UTC m=+0.023446889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:49:49 compute-0 podman[245497]: 2025-12-01 20:49:49.766310483 +0000 UTC m=+0.122978630 container start 444a67ec080370731dcaca617a8418896ed4785c2af88120fb20016f883ce9d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_solomon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 01 20:49:49 compute-0 podman[245497]: 2025-12-01 20:49:49.76946802 +0000 UTC m=+0.126136187 container attach 444a67ec080370731dcaca617a8418896ed4785c2af88120fb20016f883ce9d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_solomon, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 01 20:49:49 compute-0 strange_solomon[245513]: 167 167
Dec 01 20:49:49 compute-0 systemd[1]: libpod-444a67ec080370731dcaca617a8418896ed4785c2af88120fb20016f883ce9d4.scope: Deactivated successfully.
Dec 01 20:49:49 compute-0 podman[245497]: 2025-12-01 20:49:49.77075094 +0000 UTC m=+0.127419097 container died 444a67ec080370731dcaca617a8418896ed4785c2af88120fb20016f883ce9d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:49:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-326d3fffc6dd41f45ae8ba6ba80348e8d0792859b20a1db40c0a329e4a922fdf-merged.mount: Deactivated successfully.
Dec 01 20:49:49 compute-0 podman[245497]: 2025-12-01 20:49:49.804263141 +0000 UTC m=+0.160931288 container remove 444a67ec080370731dcaca617a8418896ed4785c2af88120fb20016f883ce9d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_solomon, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 01 20:49:49 compute-0 systemd[1]: libpod-conmon-444a67ec080370731dcaca617a8418896ed4785c2af88120fb20016f883ce9d4.scope: Deactivated successfully.
Dec 01 20:49:49 compute-0 ceph-mon[75880]: pgmap v603: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:49 compute-0 podman[245535]: 2025-12-01 20:49:49.985306331 +0000 UTC m=+0.040411755 container create 854d78779fe0237fd6b6346730c30b871a94e59f6d8286e00d7a732b69e3163b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chaplygin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:49:50 compute-0 systemd[1]: Started libpod-conmon-854d78779fe0237fd6b6346730c30b871a94e59f6d8286e00d7a732b69e3163b.scope.
Dec 01 20:49:50 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d566955d3e080c905dbbe2418007727a277376a394af12b9200abe0084d9ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d566955d3e080c905dbbe2418007727a277376a394af12b9200abe0084d9ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d566955d3e080c905dbbe2418007727a277376a394af12b9200abe0084d9ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d566955d3e080c905dbbe2418007727a277376a394af12b9200abe0084d9ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:49:50 compute-0 podman[245535]: 2025-12-01 20:49:50.058274256 +0000 UTC m=+0.113379670 container init 854d78779fe0237fd6b6346730c30b871a94e59f6d8286e00d7a732b69e3163b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:49:50 compute-0 podman[245535]: 2025-12-01 20:49:49.965908799 +0000 UTC m=+0.021014283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:49:50 compute-0 podman[245535]: 2025-12-01 20:49:50.06547224 +0000 UTC m=+0.120577664 container start 854d78779fe0237fd6b6346730c30b871a94e59f6d8286e00d7a732b69e3163b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chaplygin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:49:50 compute-0 podman[245535]: 2025-12-01 20:49:50.068438082 +0000 UTC m=+0.123543506 container attach 854d78779fe0237fd6b6346730c30b871a94e59f6d8286e00d7a732b69e3163b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:49:50 compute-0 lvm[245631]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:49:50 compute-0 lvm[245630]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:49:50 compute-0 lvm[245631]: VG ceph_vg1 finished
Dec 01 20:49:50 compute-0 lvm[245630]: VG ceph_vg0 finished
Dec 01 20:49:50 compute-0 lvm[245633]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:49:50 compute-0 lvm[245633]: VG ceph_vg2 finished
Dec 01 20:49:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:50 compute-0 crazy_chaplygin[245552]: {}
Dec 01 20:49:50 compute-0 systemd[1]: libpod-854d78779fe0237fd6b6346730c30b871a94e59f6d8286e00d7a732b69e3163b.scope: Deactivated successfully.
Dec 01 20:49:50 compute-0 systemd[1]: libpod-854d78779fe0237fd6b6346730c30b871a94e59f6d8286e00d7a732b69e3163b.scope: Consumed 1.204s CPU time.
Dec 01 20:49:50 compute-0 podman[245535]: 2025-12-01 20:49:50.842442081 +0000 UTC m=+0.897547505 container died 854d78779fe0237fd6b6346730c30b871a94e59f6d8286e00d7a732b69e3163b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chaplygin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 01 20:49:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0d566955d3e080c905dbbe2418007727a277376a394af12b9200abe0084d9ff-merged.mount: Deactivated successfully.
Dec 01 20:49:50 compute-0 podman[245535]: 2025-12-01 20:49:50.882965019 +0000 UTC m=+0.938070443 container remove 854d78779fe0237fd6b6346730c30b871a94e59f6d8286e00d7a732b69e3163b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 01 20:49:50 compute-0 systemd[1]: libpod-conmon-854d78779fe0237fd6b6346730c30b871a94e59f6d8286e00d7a732b69e3163b.scope: Deactivated successfully.
Dec 01 20:49:50 compute-0 sudo[245460]: pam_unix(sudo:session): session closed for user root
Dec 01 20:49:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:49:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:49:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:49:50 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:49:50 compute-0 sudo[245647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:49:50 compute-0 sudo[245647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:49:50 compute-0 sudo[245647]: pam_unix(sudo:session): session closed for user root
Dec 01 20:49:51 compute-0 ceph-mon[75880]: pgmap v604: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:51 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:49:51 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:49:51 compute-0 nova_compute[244568]: 2025-12-01 20:49:51.959 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:49:51 compute-0 nova_compute[244568]: 2025-12-01 20:49:51.960 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:49:51 compute-0 nova_compute[244568]: 2025-12-01 20:49:51.960 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 20:49:51 compute-0 nova_compute[244568]: 2025-12-01 20:49:51.960 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.001 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.002 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.002 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.002 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.003 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.003 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.003 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.003 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.004 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.045 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.045 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.045 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.045 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.046 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:49:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:49:52 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3339155892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.572 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:49:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.845 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.846 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5296MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.847 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:49:52 compute-0 nova_compute[244568]: 2025-12-01 20:49:52.847 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:49:52 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3339155892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:49:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:53 compute-0 ceph-mon[75880]: pgmap v605: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 01 20:49:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3301689167' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 01 20:49:55 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14318 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 01 20:49:55 compute-0 ceph-mgr[76174]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 01 20:49:55 compute-0 ceph-mgr[76174]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 01 20:49:56 compute-0 nova_compute[244568]: 2025-12-01 20:49:56.611 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 20:49:56 compute-0 nova_compute[244568]: 2025-12-01 20:49:56.611 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 20:49:56 compute-0 nova_compute[244568]: 2025-12-01 20:49:56.630 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:49:56 compute-0 ceph-mon[75880]: pgmap v606: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:56 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3301689167' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 01 20:49:56 compute-0 ceph-mon[75880]: from='client.14318 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 01 20:49:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:49:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/296504323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:49:57 compute-0 nova_compute[244568]: 2025-12-01 20:49:57.201 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:49:57 compute-0 nova_compute[244568]: 2025-12-01 20:49:57.207 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:49:57 compute-0 nova_compute[244568]: 2025-12-01 20:49:57.231 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:49:57 compute-0 nova_compute[244568]: 2025-12-01 20:49:57.271 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 20:49:57 compute-0 nova_compute[244568]: 2025-12-01 20:49:57.272 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:49:57 compute-0 ceph-mon[75880]: pgmap v607: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:57 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/296504323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:49:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:49:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:49:59 compute-0 ceph-mon[75880]: pgmap v608: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:01 compute-0 ceph-mon[75880]: pgmap v609: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:50:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:50:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:50:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:50:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:50:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:50:03 compute-0 ceph-mon[75880]: pgmap v610: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v611: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:05 compute-0 ceph-mon[75880]: pgmap v611: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:07 compute-0 ceph-mon[75880]: pgmap v612: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:10 compute-0 ceph-mon[75880]: pgmap v613: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 01 20:50:11 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468640' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 01 20:50:11 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14322 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 01 20:50:11 compute-0 ceph-mgr[76174]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 01 20:50:11 compute-0 ceph-mgr[76174]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 01 20:50:12 compute-0 ceph-mon[75880]: pgmap v614: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:12 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3468640' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 01 20:50:12 compute-0 ceph-mon[75880]: from='client.14322 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 01 20:50:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:14 compute-0 ceph-mon[75880]: pgmap v615: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:16 compute-0 ceph-mon[75880]: pgmap v616: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:17 compute-0 podman[245716]: 2025-12-01 20:50:17.141909857 +0000 UTC m=+0.086996032 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 01 20:50:17 compute-0 ceph-mon[75880]: pgmap v617: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:19 compute-0 podman[245736]: 2025-12-01 20:50:19.08403168 +0000 UTC m=+0.050324993 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 01 20:50:19 compute-0 podman[245737]: 2025-12-01 20:50:19.126944812 +0000 UTC m=+0.087564759 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:50:19 compute-0 ceph-mon[75880]: pgmap v618: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:21 compute-0 ceph-mon[75880]: pgmap v619: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:23 compute-0 ceph-mon[75880]: pgmap v620: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:25 compute-0 ceph-mon[75880]: pgmap v621: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:27 compute-0 ceph-mon[75880]: pgmap v622: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:29 compute-0 ceph-mon[75880]: pgmap v623: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:32 compute-0 ceph-mon[75880]: pgmap v624: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:50:32
Dec 01 20:50:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:50:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:50:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'images', '.mgr']
Dec 01 20:50:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:50:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:50:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:50:34 compute-0 ceph-mon[75880]: pgmap v625: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:36 compute-0 ceph-mon[75880]: pgmap v626: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:38 compute-0 ceph-mon[75880]: pgmap v627: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:50:40 compute-0 ceph-mon[75880]: pgmap v628: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:41 compute-0 ceph-mon[75880]: pgmap v629: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:43 compute-0 ceph-mon[75880]: pgmap v630: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:50:44.352 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:50:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:50:44.352 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:50:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:50:44.352 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:50:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:46 compute-0 ceph-mon[75880]: pgmap v631: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:48 compute-0 podman[245782]: 2025-12-01 20:50:48.081961148 +0000 UTC m=+0.050631813 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec 01 20:50:48 compute-0 ceph-mon[75880]: pgmap v632: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:50 compute-0 podman[245802]: 2025-12-01 20:50:50.081939937 +0000 UTC m=+0.048523758 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 01 20:50:50 compute-0 podman[245803]: 2025-12-01 20:50:50.134254031 +0000 UTC m=+0.097695564 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 01 20:50:50 compute-0 ceph-mon[75880]: pgmap v633: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:51 compute-0 sudo[245847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:50:51 compute-0 sudo[245847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:50:51 compute-0 sudo[245847]: pam_unix(sudo:session): session closed for user root
Dec 01 20:50:51 compute-0 sudo[245872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:50:51 compute-0 sudo[245872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:50:51 compute-0 sudo[245872]: pam_unix(sudo:session): session closed for user root
Dec 01 20:50:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:50:51 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:50:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:50:51 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:50:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:50:51 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:50:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:50:51 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:50:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:50:51 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:50:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:50:51 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:50:51 compute-0 sudo[245929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:50:51 compute-0 sudo[245929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:50:51 compute-0 sudo[245929]: pam_unix(sudo:session): session closed for user root
Dec 01 20:50:51 compute-0 sudo[245954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:50:51 compute-0 sudo[245954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:50:52 compute-0 podman[245991]: 2025-12-01 20:50:52.156112629 +0000 UTC m=+0.052824850 container create 3363cdd26279e9489664f40eb2c9b6f4f933ed9de7aebfb545f490dd7b32eeb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:50:52 compute-0 systemd[1]: Started libpod-conmon-3363cdd26279e9489664f40eb2c9b6f4f933ed9de7aebfb545f490dd7b32eeb5.scope.
Dec 01 20:50:52 compute-0 podman[245991]: 2025-12-01 20:50:52.128516963 +0000 UTC m=+0.025229214 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:50:52 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:50:52 compute-0 podman[245991]: 2025-12-01 20:50:52.245410462 +0000 UTC m=+0.142122743 container init 3363cdd26279e9489664f40eb2c9b6f4f933ed9de7aebfb545f490dd7b32eeb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:50:52 compute-0 podman[245991]: 2025-12-01 20:50:52.252281865 +0000 UTC m=+0.148994086 container start 3363cdd26279e9489664f40eb2c9b6f4f933ed9de7aebfb545f490dd7b32eeb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:50:52 compute-0 podman[245991]: 2025-12-01 20:50:52.256566658 +0000 UTC m=+0.153278869 container attach 3363cdd26279e9489664f40eb2c9b6f4f933ed9de7aebfb545f490dd7b32eeb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:50:52 compute-0 practical_ride[246008]: 167 167
Dec 01 20:50:52 compute-0 systemd[1]: libpod-3363cdd26279e9489664f40eb2c9b6f4f933ed9de7aebfb545f490dd7b32eeb5.scope: Deactivated successfully.
Dec 01 20:50:52 compute-0 podman[245991]: 2025-12-01 20:50:52.258704604 +0000 UTC m=+0.155416825 container died 3363cdd26279e9489664f40eb2c9b6f4f933ed9de7aebfb545f490dd7b32eeb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 01 20:50:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b17fcb62d415bbcbb6a1d870be040c6ca749317f3652c9e38554669a13f525e-merged.mount: Deactivated successfully.
Dec 01 20:50:52 compute-0 podman[245991]: 2025-12-01 20:50:52.304005561 +0000 UTC m=+0.200717742 container remove 3363cdd26279e9489664f40eb2c9b6f4f933ed9de7aebfb545f490dd7b32eeb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:50:52 compute-0 systemd[1]: libpod-conmon-3363cdd26279e9489664f40eb2c9b6f4f933ed9de7aebfb545f490dd7b32eeb5.scope: Deactivated successfully.
Dec 01 20:50:52 compute-0 ceph-mon[75880]: pgmap v634: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:52 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:50:52 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:50:52 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:50:52 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:50:52 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:50:52 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:50:52 compute-0 podman[246031]: 2025-12-01 20:50:52.473326958 +0000 UTC m=+0.038855558 container create d831f22e377dc6173e2dfd41ff9544f6e8cca5bf8010319bf6a34f5dfaeb6d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Dec 01 20:50:52 compute-0 systemd[1]: Started libpod-conmon-d831f22e377dc6173e2dfd41ff9544f6e8cca5bf8010319bf6a34f5dfaeb6d08.scope.
Dec 01 20:50:52 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfb70fbae4eda7f9d13646a4a69e2a47b6d0fdeea731f83f16d74f0ed839f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfb70fbae4eda7f9d13646a4a69e2a47b6d0fdeea731f83f16d74f0ed839f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfb70fbae4eda7f9d13646a4a69e2a47b6d0fdeea731f83f16d74f0ed839f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfb70fbae4eda7f9d13646a4a69e2a47b6d0fdeea731f83f16d74f0ed839f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfb70fbae4eda7f9d13646a4a69e2a47b6d0fdeea731f83f16d74f0ed839f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:52 compute-0 podman[246031]: 2025-12-01 20:50:52.550671928 +0000 UTC m=+0.116200528 container init d831f22e377dc6173e2dfd41ff9544f6e8cca5bf8010319bf6a34f5dfaeb6d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:50:52 compute-0 podman[246031]: 2025-12-01 20:50:52.457890118 +0000 UTC m=+0.023418738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:50:52 compute-0 podman[246031]: 2025-12-01 20:50:52.55746323 +0000 UTC m=+0.122991840 container start d831f22e377dc6173e2dfd41ff9544f6e8cca5bf8010319bf6a34f5dfaeb6d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lovelace, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 01 20:50:52 compute-0 podman[246031]: 2025-12-01 20:50:52.560503084 +0000 UTC m=+0.126031684 container attach d831f22e377dc6173e2dfd41ff9544f6e8cca5bf8010319bf6a34f5dfaeb6d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lovelace, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 01 20:50:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:52 compute-0 xenodochial_lovelace[246048]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:50:52 compute-0 xenodochial_lovelace[246048]: --> All data devices are unavailable
Dec 01 20:50:53 compute-0 systemd[1]: libpod-d831f22e377dc6173e2dfd41ff9544f6e8cca5bf8010319bf6a34f5dfaeb6d08.scope: Deactivated successfully.
Dec 01 20:50:53 compute-0 podman[246031]: 2025-12-01 20:50:53.018704349 +0000 UTC m=+0.584232949 container died d831f22e377dc6173e2dfd41ff9544f6e8cca5bf8010319bf6a34f5dfaeb6d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lovelace, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbdfb70fbae4eda7f9d13646a4a69e2a47b6d0fdeea731f83f16d74f0ed839f4-merged.mount: Deactivated successfully.
Dec 01 20:50:53 compute-0 podman[246031]: 2025-12-01 20:50:53.061478277 +0000 UTC m=+0.627006897 container remove d831f22e377dc6173e2dfd41ff9544f6e8cca5bf8010319bf6a34f5dfaeb6d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 01 20:50:53 compute-0 systemd[1]: libpod-conmon-d831f22e377dc6173e2dfd41ff9544f6e8cca5bf8010319bf6a34f5dfaeb6d08.scope: Deactivated successfully.
Dec 01 20:50:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:53 compute-0 sudo[245954]: pam_unix(sudo:session): session closed for user root
Dec 01 20:50:53 compute-0 sudo[246079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:50:53 compute-0 sudo[246079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:50:53 compute-0 sudo[246079]: pam_unix(sudo:session): session closed for user root
Dec 01 20:50:53 compute-0 sudo[246104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:50:53 compute-0 sudo[246104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:50:53 compute-0 ceph-mon[75880]: pgmap v635: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:53 compute-0 podman[246142]: 2025-12-01 20:50:53.526927916 +0000 UTC m=+0.039098495 container create cd7711ca05e10ba978c83a396625375d1a8c05e6ac79e2d9742206f5e7c56b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lehmann, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:50:53 compute-0 systemd[1]: Started libpod-conmon-cd7711ca05e10ba978c83a396625375d1a8c05e6ac79e2d9742206f5e7c56b88.scope.
Dec 01 20:50:53 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:50:53 compute-0 podman[246142]: 2025-12-01 20:50:53.509249048 +0000 UTC m=+0.021419677 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:50:53 compute-0 podman[246142]: 2025-12-01 20:50:53.61240632 +0000 UTC m=+0.124576959 container init cd7711ca05e10ba978c83a396625375d1a8c05e6ac79e2d9742206f5e7c56b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lehmann, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:50:53 compute-0 podman[246142]: 2025-12-01 20:50:53.619291773 +0000 UTC m=+0.131462352 container start cd7711ca05e10ba978c83a396625375d1a8c05e6ac79e2d9742206f5e7c56b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lehmann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 01 20:50:53 compute-0 podman[246142]: 2025-12-01 20:50:53.622451181 +0000 UTC m=+0.134621860 container attach cd7711ca05e10ba978c83a396625375d1a8c05e6ac79e2d9742206f5e7c56b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:50:53 compute-0 serene_lehmann[246158]: 167 167
Dec 01 20:50:53 compute-0 systemd[1]: libpod-cd7711ca05e10ba978c83a396625375d1a8c05e6ac79e2d9742206f5e7c56b88.scope: Deactivated successfully.
Dec 01 20:50:53 compute-0 podman[246142]: 2025-12-01 20:50:53.625107824 +0000 UTC m=+0.137278413 container died cd7711ca05e10ba978c83a396625375d1a8c05e6ac79e2d9742206f5e7c56b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lehmann, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-579739d571afa77e7a814cbda50d39c27a4f4a024d5f9181cb94e88bb5735127-merged.mount: Deactivated successfully.
Dec 01 20:50:53 compute-0 podman[246142]: 2025-12-01 20:50:53.855330081 +0000 UTC m=+0.367500660 container remove cd7711ca05e10ba978c83a396625375d1a8c05e6ac79e2d9742206f5e7c56b88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_lehmann, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 01 20:50:53 compute-0 systemd[1]: libpod-conmon-cd7711ca05e10ba978c83a396625375d1a8c05e6ac79e2d9742206f5e7c56b88.scope: Deactivated successfully.
Dec 01 20:50:54 compute-0 podman[246184]: 2025-12-01 20:50:54.060097979 +0000 UTC m=+0.067951781 container create 0e5fac90c71fab110f024329b9d5e3089c56b1741a40c83285acb6d5d2123c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:50:54 compute-0 systemd[1]: Started libpod-conmon-0e5fac90c71fab110f024329b9d5e3089c56b1741a40c83285acb6d5d2123c9b.scope.
Dec 01 20:50:54 compute-0 podman[246184]: 2025-12-01 20:50:54.034305358 +0000 UTC m=+0.042159220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:50:54 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97747741f27d579cf2a8190c87a4b26eb7524d63edaa72e83dfbc85280fa03b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97747741f27d579cf2a8190c87a4b26eb7524d63edaa72e83dfbc85280fa03b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97747741f27d579cf2a8190c87a4b26eb7524d63edaa72e83dfbc85280fa03b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97747741f27d579cf2a8190c87a4b26eb7524d63edaa72e83dfbc85280fa03b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:54 compute-0 podman[246184]: 2025-12-01 20:50:54.158829174 +0000 UTC m=+0.166682986 container init 0e5fac90c71fab110f024329b9d5e3089c56b1741a40c83285acb6d5d2123c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:50:54 compute-0 podman[246184]: 2025-12-01 20:50:54.167555375 +0000 UTC m=+0.175409187 container start 0e5fac90c71fab110f024329b9d5e3089c56b1741a40c83285acb6d5d2123c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 01 20:50:54 compute-0 podman[246184]: 2025-12-01 20:50:54.171980172 +0000 UTC m=+0.179833984 container attach 0e5fac90c71fab110f024329b9d5e3089c56b1741a40c83285acb6d5d2123c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_proskuriakova, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]: {
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:     "0": [
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:         {
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "devices": [
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "/dev/loop3"
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             ],
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_name": "ceph_lv0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_size": "21470642176",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "name": "ceph_lv0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "tags": {
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.cluster_name": "ceph",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.crush_device_class": "",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.encrypted": "0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.objectstore": "bluestore",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.osd_id": "0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.type": "block",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.vdo": "0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.with_tpm": "0"
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             },
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "type": "block",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "vg_name": "ceph_vg0"
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:         }
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:     ],
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:     "1": [
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:         {
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "devices": [
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "/dev/loop4"
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             ],
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_name": "ceph_lv1",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_size": "21470642176",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "name": "ceph_lv1",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "tags": {
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.cluster_name": "ceph",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.crush_device_class": "",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.encrypted": "0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.objectstore": "bluestore",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.osd_id": "1",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.type": "block",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.vdo": "0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.with_tpm": "0"
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             },
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "type": "block",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "vg_name": "ceph_vg1"
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:         }
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:     ],
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:     "2": [
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:         {
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "devices": [
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "/dev/loop5"
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             ],
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_name": "ceph_lv2",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_size": "21470642176",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "name": "ceph_lv2",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "tags": {
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.cluster_name": "ceph",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.crush_device_class": "",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.encrypted": "0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.objectstore": "bluestore",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.osd_id": "2",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.type": "block",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.vdo": "0",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:                 "ceph.with_tpm": "0"
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             },
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "type": "block",
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:             "vg_name": "ceph_vg2"
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:         }
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]:     ]
Dec 01 20:50:54 compute-0 crazy_proskuriakova[246201]: }
Dec 01 20:50:54 compute-0 systemd[1]: libpod-0e5fac90c71fab110f024329b9d5e3089c56b1741a40c83285acb6d5d2123c9b.scope: Deactivated successfully.
Dec 01 20:50:54 compute-0 podman[246184]: 2025-12-01 20:50:54.481289975 +0000 UTC m=+0.489143767 container died 0e5fac90c71fab110f024329b9d5e3089c56b1741a40c83285acb6d5d2123c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:50:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e97747741f27d579cf2a8190c87a4b26eb7524d63edaa72e83dfbc85280fa03b-merged.mount: Deactivated successfully.
Dec 01 20:50:54 compute-0 podman[246184]: 2025-12-01 20:50:54.554955241 +0000 UTC m=+0.562809053 container remove 0e5fac90c71fab110f024329b9d5e3089c56b1741a40c83285acb6d5d2123c9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_proskuriakova, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:50:54 compute-0 systemd[1]: libpod-conmon-0e5fac90c71fab110f024329b9d5e3089c56b1741a40c83285acb6d5d2123c9b.scope: Deactivated successfully.
Dec 01 20:50:54 compute-0 sudo[246104]: pam_unix(sudo:session): session closed for user root
Dec 01 20:50:54 compute-0 sudo[246221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:50:54 compute-0 sudo[246221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:50:54 compute-0 sudo[246221]: pam_unix(sudo:session): session closed for user root
Dec 01 20:50:54 compute-0 sudo[246246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:50:54 compute-0 sudo[246246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:50:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:55 compute-0 podman[246283]: 2025-12-01 20:50:55.0538659 +0000 UTC m=+0.045821774 container create 0621f1c94efe350293c6542590b63c5a5fcf8a05c4206b6544bdd3057098f4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shirley, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 01 20:50:55 compute-0 systemd[1]: Started libpod-conmon-0621f1c94efe350293c6542590b63c5a5fcf8a05c4206b6544bdd3057098f4de.scope.
Dec 01 20:50:55 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:50:55 compute-0 podman[246283]: 2025-12-01 20:50:55.031929629 +0000 UTC m=+0.023885563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:50:55 compute-0 podman[246283]: 2025-12-01 20:50:55.139090746 +0000 UTC m=+0.131046830 container init 0621f1c94efe350293c6542590b63c5a5fcf8a05c4206b6544bdd3057098f4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shirley, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 01 20:50:55 compute-0 podman[246283]: 2025-12-01 20:50:55.146713412 +0000 UTC m=+0.138669266 container start 0621f1c94efe350293c6542590b63c5a5fcf8a05c4206b6544bdd3057098f4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:50:55 compute-0 podman[246283]: 2025-12-01 20:50:55.150059547 +0000 UTC m=+0.142015411 container attach 0621f1c94efe350293c6542590b63c5a5fcf8a05c4206b6544bdd3057098f4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 01 20:50:55 compute-0 ecstatic_shirley[246300]: 167 167
Dec 01 20:50:55 compute-0 systemd[1]: libpod-0621f1c94efe350293c6542590b63c5a5fcf8a05c4206b6544bdd3057098f4de.scope: Deactivated successfully.
Dec 01 20:50:55 compute-0 podman[246283]: 2025-12-01 20:50:55.152928966 +0000 UTC m=+0.144884830 container died 0621f1c94efe350293c6542590b63c5a5fcf8a05c4206b6544bdd3057098f4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shirley, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f357020738661414d0d5541fb737264a63f67530e12807e3ab011bb97b08c0e-merged.mount: Deactivated successfully.
Dec 01 20:50:55 compute-0 podman[246283]: 2025-12-01 20:50:55.182232605 +0000 UTC m=+0.174188459 container remove 0621f1c94efe350293c6542590b63c5a5fcf8a05c4206b6544bdd3057098f4de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shirley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:50:55 compute-0 systemd[1]: libpod-conmon-0621f1c94efe350293c6542590b63c5a5fcf8a05c4206b6544bdd3057098f4de.scope: Deactivated successfully.
Dec 01 20:50:55 compute-0 podman[246322]: 2025-12-01 20:50:55.386401873 +0000 UTC m=+0.066335560 container create 3188f0918fca8639a0189affceabddcf6c38240872e2aa9411e189c95ace4b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:50:55 compute-0 systemd[1]: Started libpod-conmon-3188f0918fca8639a0189affceabddcf6c38240872e2aa9411e189c95ace4b43.scope.
Dec 01 20:50:55 compute-0 podman[246322]: 2025-12-01 20:50:55.360334395 +0000 UTC m=+0.040268112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:50:55 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fe5d4483886bd15e87b6ffbb9cbb045014b476dd710ed0409411fcde5c3222/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fe5d4483886bd15e87b6ffbb9cbb045014b476dd710ed0409411fcde5c3222/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fe5d4483886bd15e87b6ffbb9cbb045014b476dd710ed0409411fcde5c3222/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23fe5d4483886bd15e87b6ffbb9cbb045014b476dd710ed0409411fcde5c3222/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:50:55 compute-0 podman[246322]: 2025-12-01 20:50:55.474726176 +0000 UTC m=+0.154659843 container init 3188f0918fca8639a0189affceabddcf6c38240872e2aa9411e189c95ace4b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:50:55 compute-0 podman[246322]: 2025-12-01 20:50:55.480870216 +0000 UTC m=+0.160803863 container start 3188f0918fca8639a0189affceabddcf6c38240872e2aa9411e189c95ace4b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:50:55 compute-0 podman[246322]: 2025-12-01 20:50:55.48454914 +0000 UTC m=+0.164482787 container attach 3188f0918fca8639a0189affceabddcf6c38240872e2aa9411e189c95ace4b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:50:55 compute-0 ceph-mon[75880]: pgmap v636: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:56 compute-0 lvm[246417]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:50:56 compute-0 lvm[246416]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:50:56 compute-0 lvm[246417]: VG ceph_vg1 finished
Dec 01 20:50:56 compute-0 lvm[246416]: VG ceph_vg0 finished
Dec 01 20:50:56 compute-0 lvm[246419]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:50:56 compute-0 lvm[246419]: VG ceph_vg2 finished
Dec 01 20:50:56 compute-0 pensive_snyder[246338]: {}
Dec 01 20:50:56 compute-0 systemd[1]: libpod-3188f0918fca8639a0189affceabddcf6c38240872e2aa9411e189c95ace4b43.scope: Deactivated successfully.
Dec 01 20:50:56 compute-0 systemd[1]: libpod-3188f0918fca8639a0189affceabddcf6c38240872e2aa9411e189c95ace4b43.scope: Consumed 1.231s CPU time.
Dec 01 20:50:56 compute-0 podman[246322]: 2025-12-01 20:50:56.233365108 +0000 UTC m=+0.913298835 container died 3188f0918fca8639a0189affceabddcf6c38240872e2aa9411e189c95ace4b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 01 20:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-23fe5d4483886bd15e87b6ffbb9cbb045014b476dd710ed0409411fcde5c3222-merged.mount: Deactivated successfully.
Dec 01 20:50:56 compute-0 podman[246322]: 2025-12-01 20:50:56.274022999 +0000 UTC m=+0.953956646 container remove 3188f0918fca8639a0189affceabddcf6c38240872e2aa9411e189c95ace4b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 01 20:50:56 compute-0 systemd[1]: libpod-conmon-3188f0918fca8639a0189affceabddcf6c38240872e2aa9411e189c95ace4b43.scope: Deactivated successfully.
Dec 01 20:50:56 compute-0 sudo[246246]: pam_unix(sudo:session): session closed for user root
Dec 01 20:50:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:50:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:50:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:50:56 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:50:56 compute-0 sudo[246433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:50:56 compute-0 sudo[246433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:50:56 compute-0 sudo[246433]: pam_unix(sudo:session): session closed for user root
Dec 01 20:50:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.264 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.265 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.291 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.291 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.291 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.304 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.305 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.305 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.305 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.306 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.306 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.306 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.306 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.306 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:50:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:50:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.327 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.327 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.327 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.328 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.328 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:50:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:50:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1746024660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.825 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.968 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.969 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5289MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.969 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:50:57 compute-0 nova_compute[244568]: 2025-12-01 20:50:57.969 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:50:58 compute-0 nova_compute[244568]: 2025-12-01 20:50:58.036 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 20:50:58 compute-0 nova_compute[244568]: 2025-12-01 20:50:58.037 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 20:50:58 compute-0 nova_compute[244568]: 2025-12-01 20:50:58.054 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:50:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:50:58 compute-0 ceph-mon[75880]: pgmap v637: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:58 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1746024660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:50:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:50:58 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/999721044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:50:58 compute-0 nova_compute[244568]: 2025-12-01 20:50:58.578 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:50:58 compute-0 nova_compute[244568]: 2025-12-01 20:50:58.583 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:50:58 compute-0 nova_compute[244568]: 2025-12-01 20:50:58.596 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:50:58 compute-0 nova_compute[244568]: 2025-12-01 20:50:58.598 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 20:50:58 compute-0 nova_compute[244568]: 2025-12-01 20:50:58.598 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:50:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:50:59 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/999721044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:51:00 compute-0 ceph-mon[75880]: pgmap v638: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:02 compute-0 ceph-mon[75880]: pgmap v639: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:51:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137582946' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:51:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:51:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137582946' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:51:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:51:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:51:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:51:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:51:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:51:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:51:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3137582946' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:51:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3137582946' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:51:04 compute-0 ceph-mon[75880]: pgmap v640: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:04 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:51:04.456 155855 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:ee:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2e:39:ea:af:48:04'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 01 20:51:04 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:51:04.457 155855 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 01 20:51:04 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:51:04.457 155855 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=84a1d907-d341-4608-b17a-1f738619ea16, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 01 20:51:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:05 compute-0 ceph-mon[75880]: pgmap v641: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:07 compute-0 ceph-mon[75880]: pgmap v642: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:09 compute-0 ceph-mon[75880]: pgmap v643: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:11 compute-0 ceph-mon[75880]: pgmap v644: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:13 compute-0 ceph-mon[75880]: pgmap v645: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:14 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:51:14 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3105 writes, 13K keys, 3105 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                           Cumulative WAL: 3105 writes, 3105 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1288 writes, 5603 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 5.77 MB, 0.01 MB/s
                                           Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     78.5      0.13              0.03         6    0.022       0      0       0.0       0.0
                                             L6      1/0    4.66 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4     87.7     71.5      0.35              0.08         5    0.069     16K   2269       0.0       0.0
                                            Sum      1/0    4.66 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     63.6     73.4      0.48              0.10        11    0.043     16K   2269       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     94.7     96.6      0.20              0.06         6    0.033     10K   1497       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     87.7     71.5      0.35              0.08         5    0.069     16K   2269       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     80.4      0.13              0.03         5    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.010, interval 0.004
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.03 MB/s write, 0.03 GB read, 0.03 MB/s read, 0.5 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a3cf2218d0#2 capacity: 308.00 MB usage: 1.49 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 9.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(93,1.33 MB,0.432735%) FilterBlock(12,54.30 KB,0.0172157%) IndexBlock(12,109.77 KB,0.0348029%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 01 20:51:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:15 compute-0 ceph-mon[75880]: pgmap v646: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:17 compute-0 ceph-mon[75880]: pgmap v647: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:19 compute-0 podman[246502]: 2025-12-01 20:51:19.151020121 +0000 UTC m=+0.097380035 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 20:51:19 compute-0 ceph-mon[75880]: pgmap v648: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:21 compute-0 podman[246522]: 2025-12-01 20:51:21.093602329 +0000 UTC m=+0.059434747 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 01 20:51:21 compute-0 podman[246523]: 2025-12-01 20:51:21.158581535 +0000 UTC m=+0.121888304 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 01 20:51:22 compute-0 ceph-mon[75880]: pgmap v649: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:24 compute-0 ceph-mon[75880]: pgmap v650: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:26 compute-0 ceph-mon[75880]: pgmap v651: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:28 compute-0 ceph-mon[75880]: pgmap v652: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:30 compute-0 ceph-mon[75880]: pgmap v653: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:32 compute-0 ceph-mon[75880]: pgmap v654: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:51:32
Dec 01 20:51:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:51:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:51:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms', 'images', 'backups']
Dec 01 20:51:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:51:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:51:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:51:34 compute-0 ceph-mon[75880]: pgmap v655: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:36 compute-0 ceph-mon[75880]: pgmap v656: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:38 compute-0 ceph-mon[75880]: pgmap v657: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:40 compute-0 ceph-mon[75880]: pgmap v658: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3791079507827634e-06 of space, bias 4.0, pg target 0.001654929540939316 quantized to 16 (current 16)
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:51:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:42 compute-0 ceph-mon[75880]: pgmap v659: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:44 compute-0 ceph-mon[75880]: pgmap v660: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:51:44.353 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:51:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:51:44.354 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:51:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:51:44.354 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:51:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:46 compute-0 ceph-mon[75880]: pgmap v661: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:48 compute-0 ceph-mon[75880]: pgmap v662: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:49 compute-0 ceph-mon[75880]: pgmap v663: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:50 compute-0 podman[246567]: 2025-12-01 20:51:50.101315221 +0000 UTC m=+0.057260833 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:51:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:51 compute-0 ceph-mon[75880]: pgmap v664: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:52 compute-0 podman[246588]: 2025-12-01 20:51:52.108740669 +0000 UTC m=+0.065976373 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:51:52 compute-0 podman[246589]: 2025-12-01 20:51:52.147897021 +0000 UTC m=+0.109166481 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 01 20:51:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:53 compute-0 ceph-mon[75880]: pgmap v665: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:54 compute-0 ceph-osd[87692]: bluestore.MempoolThread fragmentation_score=0.000121 took=0.000015s
Dec 01 20:51:54 compute-0 ceph-osd[86634]: bluestore.MempoolThread fragmentation_score=0.000122 took=0.000013s
Dec 01 20:51:54 compute-0 ceph-osd[88745]: bluestore.MempoolThread fragmentation_score=0.000128 took=0.000016s
Dec 01 20:51:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:56 compute-0 ceph-mon[75880]: pgmap v666: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:56 compute-0 sudo[246629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:51:56 compute-0 sudo[246629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:51:56 compute-0 sudo[246629]: pam_unix(sudo:session): session closed for user root
Dec 01 20:51:56 compute-0 sudo[246654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:51:56 compute-0 sudo[246654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:51:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:57 compute-0 sudo[246654]: pam_unix(sudo:session): session closed for user root
Dec 01 20:51:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:51:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:51:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:51:57 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:51:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:51:57 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:51:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:51:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:51:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:51:57 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:51:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:51:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:51:57 compute-0 sudo[246710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:51:57 compute-0 sudo[246710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:51:57 compute-0 sudo[246710]: pam_unix(sudo:session): session closed for user root
Dec 01 20:51:57 compute-0 sudo[246735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:51:57 compute-0 sudo[246735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:51:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:51:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:51:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:51:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:51:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:51:57 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:51:57 compute-0 podman[246771]: 2025-12-01 20:51:57.580858409 +0000 UTC m=+0.096621522 container create 7f05ae460998f9fbcc09c1effc9101a7b50afa4ad3940bad4fbb6af874e65667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_williamson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:51:57 compute-0 podman[246771]: 2025-12-01 20:51:57.509041836 +0000 UTC m=+0.024804939 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:51:57 compute-0 systemd[1]: Started libpod-conmon-7f05ae460998f9fbcc09c1effc9101a7b50afa4ad3940bad4fbb6af874e65667.scope.
Dec 01 20:51:57 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:51:57 compute-0 podman[246771]: 2025-12-01 20:51:57.680050561 +0000 UTC m=+0.195813644 container init 7f05ae460998f9fbcc09c1effc9101a7b50afa4ad3940bad4fbb6af874e65667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_williamson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:51:57 compute-0 podman[246771]: 2025-12-01 20:51:57.687761399 +0000 UTC m=+0.203524492 container start 7f05ae460998f9fbcc09c1effc9101a7b50afa4ad3940bad4fbb6af874e65667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 01 20:51:57 compute-0 objective_williamson[246787]: 167 167
Dec 01 20:51:57 compute-0 systemd[1]: libpod-7f05ae460998f9fbcc09c1effc9101a7b50afa4ad3940bad4fbb6af874e65667.scope: Deactivated successfully.
Dec 01 20:51:57 compute-0 conmon[246787]: conmon 7f05ae460998f9fbcc09 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f05ae460998f9fbcc09c1effc9101a7b50afa4ad3940bad4fbb6af874e65667.scope/container/memory.events
Dec 01 20:51:57 compute-0 podman[246771]: 2025-12-01 20:51:57.705021414 +0000 UTC m=+0.220784497 container attach 7f05ae460998f9fbcc09c1effc9101a7b50afa4ad3940bad4fbb6af874e65667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_williamson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:51:57 compute-0 podman[246771]: 2025-12-01 20:51:57.70649547 +0000 UTC m=+0.222258553 container died 7f05ae460998f9fbcc09c1effc9101a7b50afa4ad3940bad4fbb6af874e65667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 01 20:51:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f66280afb200d95e4f4b8d5546723fbe7496b8c571ed83afaf6ad2325b5f9c50-merged.mount: Deactivated successfully.
Dec 01 20:51:57 compute-0 podman[246771]: 2025-12-01 20:51:57.831464719 +0000 UTC m=+0.347227802 container remove 7f05ae460998f9fbcc09c1effc9101a7b50afa4ad3940bad4fbb6af874e65667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_williamson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 01 20:51:57 compute-0 systemd[1]: libpod-conmon-7f05ae460998f9fbcc09c1effc9101a7b50afa4ad3940bad4fbb6af874e65667.scope: Deactivated successfully.
Dec 01 20:51:58 compute-0 podman[246811]: 2025-12-01 20:51:58.044889408 +0000 UTC m=+0.058912505 container create e8e17ad1b0394f6a12ae1fe3eedb60cf7ba8bc7c8a3f57533f361b270a2305c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:51:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:51:58 compute-0 systemd[1]: Started libpod-conmon-e8e17ad1b0394f6a12ae1fe3eedb60cf7ba8bc7c8a3f57533f361b270a2305c7.scope.
Dec 01 20:51:58 compute-0 podman[246811]: 2025-12-01 20:51:58.011254017 +0000 UTC m=+0.025277134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:51:58 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b542a389e559af15a8a8729877f66d9b5728478dd1ac5208e3fe8d5faa0bcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b542a389e559af15a8a8729877f66d9b5728478dd1ac5208e3fe8d5faa0bcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b542a389e559af15a8a8729877f66d9b5728478dd1ac5208e3fe8d5faa0bcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b542a389e559af15a8a8729877f66d9b5728478dd1ac5208e3fe8d5faa0bcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b542a389e559af15a8a8729877f66d9b5728478dd1ac5208e3fe8d5faa0bcd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:51:58 compute-0 podman[246811]: 2025-12-01 20:51:58.146981039 +0000 UTC m=+0.161004176 container init e8e17ad1b0394f6a12ae1fe3eedb60cf7ba8bc7c8a3f57533f361b270a2305c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 20:51:58 compute-0 podman[246811]: 2025-12-01 20:51:58.153539772 +0000 UTC m=+0.167562869 container start e8e17ad1b0394f6a12ae1fe3eedb60cf7ba8bc7c8a3f57533f361b270a2305c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:51:58 compute-0 podman[246811]: 2025-12-01 20:51:58.201197798 +0000 UTC m=+0.215220915 container attach e8e17ad1b0394f6a12ae1fe3eedb60cf7ba8bc7c8a3f57533f361b270a2305c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:51:58 compute-0 ceph-mon[75880]: pgmap v667: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.600 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.601 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.601 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.601 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 20:51:58 compute-0 goofy_nightingale[246827]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:51:58 compute-0 goofy_nightingale[246827]: --> All data devices are unavailable
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.642 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.642 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.643 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.643 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.643 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.643 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.643 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.643 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 20:51:58 compute-0 nova_compute[244568]: 2025-12-01 20:51:58.644 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:51:58 compute-0 systemd[1]: libpod-e8e17ad1b0394f6a12ae1fe3eedb60cf7ba8bc7c8a3f57533f361b270a2305c7.scope: Deactivated successfully.
Dec 01 20:51:58 compute-0 podman[246811]: 2025-12-01 20:51:58.654416692 +0000 UTC m=+0.668439799 container died e8e17ad1b0394f6a12ae1fe3eedb60cf7ba8bc7c8a3f57533f361b270a2305c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8b542a389e559af15a8a8729877f66d9b5728478dd1ac5208e3fe8d5faa0bcd-merged.mount: Deactivated successfully.
Dec 01 20:51:58 compute-0 podman[246811]: 2025-12-01 20:51:58.695461632 +0000 UTC m=+0.709484729 container remove e8e17ad1b0394f6a12ae1fe3eedb60cf7ba8bc7c8a3f57533f361b270a2305c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 01 20:51:58 compute-0 systemd[1]: libpod-conmon-e8e17ad1b0394f6a12ae1fe3eedb60cf7ba8bc7c8a3f57533f361b270a2305c7.scope: Deactivated successfully.
Dec 01 20:51:58 compute-0 sudo[246735]: pam_unix(sudo:session): session closed for user root
Dec 01 20:51:58 compute-0 sudo[246858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:51:58 compute-0 sudo[246858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:51:58 compute-0 sudo[246858]: pam_unix(sudo:session): session closed for user root
Dec 01 20:51:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:51:58 compute-0 sudo[246883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:51:58 compute-0 sudo[246883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:51:59 compute-0 nova_compute[244568]: 2025-12-01 20:51:59.105 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:51:59 compute-0 nova_compute[244568]: 2025-12-01 20:51:59.106 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:51:59 compute-0 nova_compute[244568]: 2025-12-01 20:51:59.106 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:51:59 compute-0 nova_compute[244568]: 2025-12-01 20:51:59.106 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 20:51:59 compute-0 nova_compute[244568]: 2025-12-01 20:51:59.107 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:51:59 compute-0 podman[246921]: 2025-12-01 20:51:59.133468875 +0000 UTC m=+0.039482993 container create 8b8dac5f259f9171046a1602f8c7318b0c5dbf52d6eb2d8027200cccefa44b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:51:59 compute-0 systemd[1]: Started libpod-conmon-8b8dac5f259f9171046a1602f8c7318b0c5dbf52d6eb2d8027200cccefa44b0f.scope.
Dec 01 20:51:59 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:51:59 compute-0 podman[246921]: 2025-12-01 20:51:59.2007943 +0000 UTC m=+0.106808458 container init 8b8dac5f259f9171046a1602f8c7318b0c5dbf52d6eb2d8027200cccefa44b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:51:59 compute-0 podman[246921]: 2025-12-01 20:51:59.208913671 +0000 UTC m=+0.114927799 container start 8b8dac5f259f9171046a1602f8c7318b0c5dbf52d6eb2d8027200cccefa44b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_lumiere, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:51:59 compute-0 podman[246921]: 2025-12-01 20:51:59.212603486 +0000 UTC m=+0.118617614 container attach 8b8dac5f259f9171046a1602f8c7318b0c5dbf52d6eb2d8027200cccefa44b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_lumiere, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 01 20:51:59 compute-0 podman[246921]: 2025-12-01 20:51:59.118333656 +0000 UTC m=+0.024347804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:51:59 compute-0 reverent_lumiere[246939]: 167 167
Dec 01 20:51:59 compute-0 systemd[1]: libpod-8b8dac5f259f9171046a1602f8c7318b0c5dbf52d6eb2d8027200cccefa44b0f.scope: Deactivated successfully.
Dec 01 20:51:59 compute-0 podman[246921]: 2025-12-01 20:51:59.217660242 +0000 UTC m=+0.123674390 container died 8b8dac5f259f9171046a1602f8c7318b0c5dbf52d6eb2d8027200cccefa44b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:51:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f433d5f3371ab71da5a73c9faa6fd7aad5b871daf83d9342087cb6554235c61-merged.mount: Deactivated successfully.
Dec 01 20:51:59 compute-0 podman[246921]: 2025-12-01 20:51:59.27086701 +0000 UTC m=+0.176881138 container remove 8b8dac5f259f9171046a1602f8c7318b0c5dbf52d6eb2d8027200cccefa44b0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_lumiere, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:51:59 compute-0 systemd[1]: libpod-conmon-8b8dac5f259f9171046a1602f8c7318b0c5dbf52d6eb2d8027200cccefa44b0f.scope: Deactivated successfully.
Dec 01 20:51:59 compute-0 podman[246979]: 2025-12-01 20:51:59.436014474 +0000 UTC m=+0.037649808 container create fe78fef9d79c4ce5d92f759375250f1057eb9654817f51d87e4f7b5cf7bfe8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:51:59 compute-0 systemd[1]: Started libpod-conmon-fe78fef9d79c4ce5d92f759375250f1057eb9654817f51d87e4f7b5cf7bfe8da.scope.
Dec 01 20:51:59 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6510136a3ece5cd3cc9a58c12f61194a2b909a29c4600aa5c2e0bed75924d2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6510136a3ece5cd3cc9a58c12f61194a2b909a29c4600aa5c2e0bed75924d2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6510136a3ece5cd3cc9a58c12f61194a2b909a29c4600aa5c2e0bed75924d2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6510136a3ece5cd3cc9a58c12f61194a2b909a29c4600aa5c2e0bed75924d2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:51:59 compute-0 podman[246979]: 2025-12-01 20:51:59.418365997 +0000 UTC m=+0.020001361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:51:59 compute-0 podman[246979]: 2025-12-01 20:51:59.518365673 +0000 UTC m=+0.120001037 container init fe78fef9d79c4ce5d92f759375250f1057eb9654817f51d87e4f7b5cf7bfe8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:51:59 compute-0 podman[246979]: 2025-12-01 20:51:59.526756903 +0000 UTC m=+0.128392277 container start fe78fef9d79c4ce5d92f759375250f1057eb9654817f51d87e4f7b5cf7bfe8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 01 20:51:59 compute-0 podman[246979]: 2025-12-01 20:51:59.531639345 +0000 UTC m=+0.133274679 container attach fe78fef9d79c4ce5d92f759375250f1057eb9654817f51d87e4f7b5cf7bfe8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 01 20:51:59 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:51:59 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213847923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:51:59 compute-0 nova_compute[244568]: 2025-12-01 20:51:59.729 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:51:59 compute-0 youthful_cannon[246996]: {
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:     "0": [
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:         {
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "devices": [
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "/dev/loop3"
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             ],
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_name": "ceph_lv0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_size": "21470642176",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "name": "ceph_lv0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "tags": {
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.cluster_name": "ceph",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.crush_device_class": "",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.encrypted": "0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.objectstore": "bluestore",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.osd_id": "0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.type": "block",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.vdo": "0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.with_tpm": "0"
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             },
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "type": "block",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "vg_name": "ceph_vg0"
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:         }
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:     ],
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:     "1": [
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:         {
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "devices": [
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "/dev/loop4"
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             ],
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_name": "ceph_lv1",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_size": "21470642176",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "name": "ceph_lv1",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "tags": {
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.cluster_name": "ceph",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.crush_device_class": "",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.encrypted": "0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.objectstore": "bluestore",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.osd_id": "1",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.type": "block",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.vdo": "0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.with_tpm": "0"
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             },
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "type": "block",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "vg_name": "ceph_vg1"
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:         }
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:     ],
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:     "2": [
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:         {
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "devices": [
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "/dev/loop5"
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             ],
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_name": "ceph_lv2",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_size": "21470642176",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "name": "ceph_lv2",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "tags": {
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.cluster_name": "ceph",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.crush_device_class": "",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.encrypted": "0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.objectstore": "bluestore",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.osd_id": "2",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.type": "block",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.vdo": "0",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:                 "ceph.with_tpm": "0"
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             },
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "type": "block",
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:             "vg_name": "ceph_vg2"
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:         }
Dec 01 20:51:59 compute-0 youthful_cannon[246996]:     ]
Dec 01 20:51:59 compute-0 youthful_cannon[246996]: }
Dec 01 20:51:59 compute-0 systemd[1]: libpod-fe78fef9d79c4ce5d92f759375250f1057eb9654817f51d87e4f7b5cf7bfe8da.scope: Deactivated successfully.
Dec 01 20:51:59 compute-0 podman[246979]: 2025-12-01 20:51:59.858744273 +0000 UTC m=+0.460379607 container died fe78fef9d79c4ce5d92f759375250f1057eb9654817f51d87e4f7b5cf7bfe8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Dec 01 20:51:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6510136a3ece5cd3cc9a58c12f61194a2b909a29c4600aa5c2e0bed75924d2f-merged.mount: Deactivated successfully.
Dec 01 20:51:59 compute-0 nova_compute[244568]: 2025-12-01 20:51:59.919 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:51:59 compute-0 nova_compute[244568]: 2025-12-01 20:51:59.921 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5263MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 20:51:59 compute-0 nova_compute[244568]: 2025-12-01 20:51:59.921 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:51:59 compute-0 nova_compute[244568]: 2025-12-01 20:51:59.921 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:51:59 compute-0 podman[246979]: 2025-12-01 20:51:59.943893019 +0000 UTC m=+0.545528353 container remove fe78fef9d79c4ce5d92f759375250f1057eb9654817f51d87e4f7b5cf7bfe8da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 01 20:51:59 compute-0 systemd[1]: libpod-conmon-fe78fef9d79c4ce5d92f759375250f1057eb9654817f51d87e4f7b5cf7bfe8da.scope: Deactivated successfully.
Dec 01 20:52:00 compute-0 sudo[246883]: pam_unix(sudo:session): session closed for user root
Dec 01 20:52:00 compute-0 sudo[247020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:52:00 compute-0 sudo[247020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:52:00 compute-0 sudo[247020]: pam_unix(sudo:session): session closed for user root
Dec 01 20:52:00 compute-0 nova_compute[244568]: 2025-12-01 20:52:00.097 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 20:52:00 compute-0 nova_compute[244568]: 2025-12-01 20:52:00.098 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 20:52:00 compute-0 nova_compute[244568]: 2025-12-01 20:52:00.110 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:52:00 compute-0 sudo[247045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:52:00 compute-0 sudo[247045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:52:00 compute-0 ceph-mon[75880]: pgmap v668: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:00 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/213847923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:52:00 compute-0 podman[247102]: 2025-12-01 20:52:00.455521062 +0000 UTC m=+0.057519723 container create 8da7707cadd8855b5d857fe6891e951ba5a9862df68defbd9b03d2685974453f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_einstein, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:52:00 compute-0 systemd[1]: Started libpod-conmon-8da7707cadd8855b5d857fe6891e951ba5a9862df68defbd9b03d2685974453f.scope.
Dec 01 20:52:00 compute-0 podman[247102]: 2025-12-01 20:52:00.427952499 +0000 UTC m=+0.029951210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:52:00 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:52:00 compute-0 podman[247102]: 2025-12-01 20:52:00.554876008 +0000 UTC m=+0.156874669 container init 8da7707cadd8855b5d857fe6891e951ba5a9862df68defbd9b03d2685974453f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_einstein, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 01 20:52:00 compute-0 podman[247102]: 2025-12-01 20:52:00.561632888 +0000 UTC m=+0.163631559 container start 8da7707cadd8855b5d857fe6891e951ba5a9862df68defbd9b03d2685974453f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_einstein, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:52:00 compute-0 podman[247102]: 2025-12-01 20:52:00.565721894 +0000 UTC m=+0.167720545 container attach 8da7707cadd8855b5d857fe6891e951ba5a9862df68defbd9b03d2685974453f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_einstein, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:52:00 compute-0 systemd[1]: libpod-8da7707cadd8855b5d857fe6891e951ba5a9862df68defbd9b03d2685974453f.scope: Deactivated successfully.
Dec 01 20:52:00 compute-0 hardcore_einstein[247118]: 167 167
Dec 01 20:52:00 compute-0 conmon[247118]: conmon 8da7707cadd8855b5d85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8da7707cadd8855b5d857fe6891e951ba5a9862df68defbd9b03d2685974453f.scope/container/memory.events
Dec 01 20:52:00 compute-0 podman[247102]: 2025-12-01 20:52:00.573356131 +0000 UTC m=+0.175354792 container died 8da7707cadd8855b5d857fe6891e951ba5a9862df68defbd9b03d2685974453f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f256f2feb49a6de3d74903b4157f33da41e0707ed49fd2306c83f00abb4614b5-merged.mount: Deactivated successfully.
Dec 01 20:52:00 compute-0 podman[247102]: 2025-12-01 20:52:00.609856831 +0000 UTC m=+0.211855462 container remove 8da7707cadd8855b5d857fe6891e951ba5a9862df68defbd9b03d2685974453f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_einstein, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 01 20:52:00 compute-0 systemd[1]: libpod-conmon-8da7707cadd8855b5d857fe6891e951ba5a9862df68defbd9b03d2685974453f.scope: Deactivated successfully.
Dec 01 20:52:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:52:00 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303626911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:52:00 compute-0 nova_compute[244568]: 2025-12-01 20:52:00.701 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:52:00 compute-0 nova_compute[244568]: 2025-12-01 20:52:00.707 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:52:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:00 compute-0 podman[247143]: 2025-12-01 20:52:00.73997525 +0000 UTC m=+0.019901428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:52:00 compute-0 podman[247143]: 2025-12-01 20:52:00.922170371 +0000 UTC m=+0.202096509 container create ddfbaa172212041b427e84cd2f59f697a5636c5c8ca1178029789d6b87510dbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_knuth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 01 20:52:01 compute-0 systemd[1]: Started libpod-conmon-ddfbaa172212041b427e84cd2f59f697a5636c5c8ca1178029789d6b87510dbc.scope.
Dec 01 20:52:01 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8276d034d87158a565dbfebe14bb8e4edd8bc2f0488c8eda8137ee8db620998c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8276d034d87158a565dbfebe14bb8e4edd8bc2f0488c8eda8137ee8db620998c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8276d034d87158a565dbfebe14bb8e4edd8bc2f0488c8eda8137ee8db620998c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8276d034d87158a565dbfebe14bb8e4edd8bc2f0488c8eda8137ee8db620998c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:52:01 compute-0 podman[247143]: 2025-12-01 20:52:01.257740452 +0000 UTC m=+0.537666590 container init ddfbaa172212041b427e84cd2f59f697a5636c5c8ca1178029789d6b87510dbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_knuth, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 01 20:52:01 compute-0 podman[247143]: 2025-12-01 20:52:01.269216297 +0000 UTC m=+0.549142435 container start ddfbaa172212041b427e84cd2f59f697a5636c5c8ca1178029789d6b87510dbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:52:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/303626911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:52:01 compute-0 podman[247143]: 2025-12-01 20:52:01.296562194 +0000 UTC m=+0.576488332 container attach ddfbaa172212041b427e84cd2f59f697a5636c5c8ca1178029789d6b87510dbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:52:01 compute-0 nova_compute[244568]: 2025-12-01 20:52:01.456 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:52:01 compute-0 nova_compute[244568]: 2025-12-01 20:52:01.458 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 20:52:01 compute-0 nova_compute[244568]: 2025-12-01 20:52:01.458 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:52:01 compute-0 lvm[247237]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:52:01 compute-0 lvm[247238]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:52:01 compute-0 lvm[247237]: VG ceph_vg0 finished
Dec 01 20:52:01 compute-0 lvm[247238]: VG ceph_vg1 finished
Dec 01 20:52:01 compute-0 lvm[247240]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:52:01 compute-0 lvm[247240]: VG ceph_vg2 finished
Dec 01 20:52:02 compute-0 priceless_knuth[247159]: {}
Dec 01 20:52:02 compute-0 systemd[1]: libpod-ddfbaa172212041b427e84cd2f59f697a5636c5c8ca1178029789d6b87510dbc.scope: Deactivated successfully.
Dec 01 20:52:02 compute-0 systemd[1]: libpod-ddfbaa172212041b427e84cd2f59f697a5636c5c8ca1178029789d6b87510dbc.scope: Consumed 1.325s CPU time.
Dec 01 20:52:02 compute-0 podman[247143]: 2025-12-01 20:52:02.069056054 +0000 UTC m=+1.348982192 container died ddfbaa172212041b427e84cd2f59f697a5636c5c8ca1178029789d6b87510dbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8276d034d87158a565dbfebe14bb8e4edd8bc2f0488c8eda8137ee8db620998c-merged.mount: Deactivated successfully.
Dec 01 20:52:02 compute-0 podman[247143]: 2025-12-01 20:52:02.168216915 +0000 UTC m=+1.448143053 container remove ddfbaa172212041b427e84cd2f59f697a5636c5c8ca1178029789d6b87510dbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_knuth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:52:02 compute-0 systemd[1]: libpod-conmon-ddfbaa172212041b427e84cd2f59f697a5636c5c8ca1178029789d6b87510dbc.scope: Deactivated successfully.
Dec 01 20:52:02 compute-0 sudo[247045]: pam_unix(sudo:session): session closed for user root
Dec 01 20:52:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:52:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:52:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:52:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:52:02 compute-0 sudo[247255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:52:02 compute-0 sudo[247255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:52:02 compute-0 sudo[247255]: pam_unix(sudo:session): session closed for user root
Dec 01 20:52:02 compute-0 ceph-mon[75880]: pgmap v669: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:52:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:52:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:52:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3619597054' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:52:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:52:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3619597054' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:52:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:52:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:52:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:52:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:52:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:52:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:52:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3619597054' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:52:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3619597054' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:52:04 compute-0 ceph-mon[75880]: pgmap v670: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:05 compute-0 ceph-mon[75880]: pgmap v671: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:07 compute-0 ceph-mon[75880]: pgmap v672: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.133845) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622328133894, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2149, "num_deletes": 508, "total_data_size": 2117894, "memory_usage": 2165808, "flush_reason": "Manual Compaction"}
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622328146668, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2058031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12119, "largest_seqno": 14267, "table_properties": {"data_size": 2048844, "index_size": 5237, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 21080, "raw_average_key_size": 18, "raw_value_size": 2028413, "raw_average_value_size": 1795, "num_data_blocks": 241, "num_entries": 1130, "num_filter_entries": 1130, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764622123, "oldest_key_time": 1764622123, "file_creation_time": 1764622328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12859 microseconds, and 4641 cpu microseconds.
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.146711) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2058031 bytes OK
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.146728) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.148239) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.148253) EVENT_LOG_v1 {"time_micros": 1764622328148249, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.148272) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2107839, prev total WAL file size 2107839, number of live WAL files 2.
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.149207) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323534' seq:0, type:0; will stop at (end)
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2009KB)], [32(4773KB)]
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622328149254, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 6946481, "oldest_snapshot_seqno": -1}
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3350 keys, 5512214 bytes, temperature: kUnknown
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622328194861, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 5512214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5487276, "index_size": 15485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 79279, "raw_average_key_size": 23, "raw_value_size": 5424531, "raw_average_value_size": 1619, "num_data_blocks": 669, "num_entries": 3350, "num_filter_entries": 3350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 1764622328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.195360) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 5512214 bytes
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.196937) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.8 rd, 120.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 4.7 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(6.1) write-amplify(2.7) OK, records in: 4379, records dropped: 1029 output_compression: NoCompression
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.196970) EVENT_LOG_v1 {"time_micros": 1764622328196954, "job": 14, "event": "compaction_finished", "compaction_time_micros": 45771, "compaction_time_cpu_micros": 20887, "output_level": 6, "num_output_files": 1, "total_output_size": 5512214, "num_input_records": 4379, "num_output_records": 3350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622328198286, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622328200484, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.149119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.200641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.200649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.200654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.200658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:52:08 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:52:08.200663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:52:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:10 compute-0 ceph-mon[75880]: pgmap v673: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:12 compute-0 ceph-mon[75880]: pgmap v674: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:14 compute-0 ceph-mon[75880]: pgmap v675: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:16 compute-0 ceph-mon[75880]: pgmap v676: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:52:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4357 writes, 20K keys, 4357 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4357 writes, 454 syncs, 9.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:52:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:17 compute-0 ceph-mon[75880]: pgmap v677: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:19 compute-0 ceph-mon[75880]: pgmap v678: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:21 compute-0 podman[247280]: 2025-12-01 20:52:21.112037318 +0000 UTC m=+0.076171970 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 01 20:52:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:52:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 4512 writes, 20K keys, 4512 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4512 writes, 503 syncs, 8.97 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:52:21 compute-0 ceph-mon[75880]: pgmap v679: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:23 compute-0 podman[247300]: 2025-12-01 20:52:23.094199435 +0000 UTC m=+0.058898165 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 01 20:52:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:23 compute-0 podman[247301]: 2025-12-01 20:52:23.142223502 +0000 UTC m=+0.101370690 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 01 20:52:23 compute-0 ceph-mon[75880]: pgmap v680: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:25 compute-0 ceph-mon[75880]: pgmap v681: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:52:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4156 writes, 19K keys, 4156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4156 writes, 370 syncs, 11.23 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:52:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:27 compute-0 ceph-mon[75880]: pgmap v682: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:28 compute-0 ceph-mgr[76174]: [devicehealth INFO root] Check health
Dec 01 20:52:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:29 compute-0 ceph-mon[75880]: pgmap v683: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:31 compute-0 ceph-mon[75880]: pgmap v684: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:52:32
Dec 01 20:52:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:52:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:52:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['images', 'backups', 'volumes', 'cephfs.cephfs.data', 'vms', '.mgr', 'cephfs.cephfs.meta']
Dec 01 20:52:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:52:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:52:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:52:33 compute-0 ceph-mon[75880]: pgmap v685: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 01 20:52:35 compute-0 ceph-mon[75880]: pgmap v686: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 01 20:52:35 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 01 20:52:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 01 20:52:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 01 20:52:36 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 01 20:52:36 compute-0 ceph-mon[75880]: osdmap e65: 3 total, 3 up, 3 in
Dec 01 20:52:38 compute-0 ceph-mon[75880]: pgmap v688: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:52:38 compute-0 ceph-mon[75880]: osdmap e66: 3 total, 3 up, 3 in
Dec 01 20:52:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 13 MiB data, 94 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Dec 01 20:52:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 01 20:52:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 01 20:52:39 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 01 20:52:40 compute-0 ceph-mon[75880]: pgmap v690: 177 pgs: 177 active+clean; 13 MiB data, 94 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Dec 01 20:52:40 compute-0 ceph-mon[75880]: osdmap e67: 3 total, 3 up, 3 in
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00020267998030341417 of space, bias 1.0, pg target 0.06080399409102425 quantized to 32 (current 32)
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3688614011827745e-06 of space, bias 4.0, pg target 0.0016426336814193293 quantized to 16 (current 16)
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:52:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 13 MiB data, 94 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Dec 01 20:52:42 compute-0 ceph-mon[75880]: pgmap v692: 177 pgs: 177 active+clean; 13 MiB data, 94 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 19 op/s
Dec 01 20:52:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 29 MiB data, 102 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 4.1 MiB/s wr, 24 op/s
Dec 01 20:52:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 01 20:52:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 01 20:52:43 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 01 20:52:44 compute-0 ceph-mon[75880]: pgmap v693: 177 pgs: 177 active+clean; 29 MiB data, 102 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 4.1 MiB/s wr, 24 op/s
Dec 01 20:52:44 compute-0 ceph-mon[75880]: osdmap e68: 3 total, 3 up, 3 in
Dec 01 20:52:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:52:44.354 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:52:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:52:44.355 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:52:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:52:44.355 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:52:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 41 MiB data, 114 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.2 MiB/s wr, 45 op/s
Dec 01 20:52:46 compute-0 ceph-mon[75880]: pgmap v695: 177 pgs: 177 active+clean; 41 MiB data, 114 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.2 MiB/s wr, 45 op/s
Dec 01 20:52:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 41 MiB data, 114 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.6 MiB/s wr, 29 op/s
Dec 01 20:52:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:48 compute-0 ceph-mon[75880]: pgmap v696: 177 pgs: 177 active+clean; 41 MiB data, 114 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.6 MiB/s wr, 29 op/s
Dec 01 20:52:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.1 MiB/s wr, 28 op/s
Dec 01 20:52:49 compute-0 ceph-mon[75880]: pgmap v697: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.1 MiB/s wr, 28 op/s
Dec 01 20:52:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.8 MiB/s wr, 26 op/s
Dec 01 20:52:51 compute-0 ceph-mon[75880]: pgmap v698: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.8 MiB/s wr, 26 op/s
Dec 01 20:52:52 compute-0 podman[247346]: 2025-12-01 20:52:52.103451905 +0000 UTC m=+0.062878338 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 01 20:52:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 MiB/s wr, 21 op/s
Dec 01 20:52:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:53 compute-0 ceph-mon[75880]: pgmap v699: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 MiB/s wr, 21 op/s
Dec 01 20:52:54 compute-0 podman[247365]: 2025-12-01 20:52:54.109535903 +0000 UTC m=+0.069397871 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 01 20:52:54 compute-0 podman[247366]: 2025-12-01 20:52:54.141090859 +0000 UTC m=+0.092879647 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 01 20:52:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.1 MiB/s wr, 12 op/s
Dec 01 20:52:55 compute-0 nova_compute[244568]: 2025-12-01 20:52:55.810 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:52:55 compute-0 nova_compute[244568]: 2025-12-01 20:52:55.810 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:52:55 compute-0 nova_compute[244568]: 2025-12-01 20:52:55.827 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:52:55 compute-0 nova_compute[244568]: 2025-12-01 20:52:55.828 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:52:55 compute-0 nova_compute[244568]: 2025-12-01 20:52:55.828 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 20:52:55 compute-0 nova_compute[244568]: 2025-12-01 20:52:55.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:52:55 compute-0 nova_compute[244568]: 2025-12-01 20:52:55.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:52:56 compute-0 ceph-mon[75880]: pgmap v700: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.1 MiB/s wr, 12 op/s
Dec 01 20:52:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Dec 01 20:52:56 compute-0 nova_compute[244568]: 2025-12-01 20:52:56.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:52:56 compute-0 nova_compute[244568]: 2025-12-01 20:52:56.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 20:52:56 compute-0 nova_compute[244568]: 2025-12-01 20:52:56.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.001 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.001 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.001 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.002 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.027 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.028 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.028 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.028 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.028 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:52:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:52:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3142986985' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.578 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.736 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.737 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5282MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.738 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.738 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.822 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.822 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 20:52:57 compute-0 nova_compute[244568]: 2025-12-01 20:52:57.840 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:52:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:52:58 compute-0 ceph-mon[75880]: pgmap v701: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Dec 01 20:52:58 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3142986985' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:52:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:52:58 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2598928945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:52:58 compute-0 nova_compute[244568]: 2025-12-01 20:52:58.390 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:52:58 compute-0 nova_compute[244568]: 2025-12-01 20:52:58.396 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:52:58 compute-0 nova_compute[244568]: 2025-12-01 20:52:58.413 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:52:58 compute-0 nova_compute[244568]: 2025-12-01 20:52:58.415 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 20:52:58 compute-0 nova_compute[244568]: 2025-12-01 20:52:58.415 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:52:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Dec 01 20:52:59 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2598928945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:53:00 compute-0 ceph-mon[75880]: pgmap v702: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Dec 01 20:53:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:01 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:53:01.466 155855 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:ee:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2e:39:ea:af:48:04'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 01 20:53:01 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:53:01.467 155855 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 01 20:53:02 compute-0 ceph-mon[75880]: pgmap v703: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:53:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2190514921' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:53:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:53:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2190514921' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:53:02 compute-0 sudo[247453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:53:02 compute-0 sudo[247453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:53:02 compute-0 sudo[247453]: pam_unix(sudo:session): session closed for user root
Dec 01 20:53:02 compute-0 sudo[247478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 01 20:53:02 compute-0 sudo[247478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:53:02 compute-0 sudo[247478]: pam_unix(sudo:session): session closed for user root
Dec 01 20:53:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:53:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:53:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:53:02 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:53:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:02 compute-0 sudo[247523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:53:02 compute-0 sudo[247523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:53:02 compute-0 sudo[247523]: pam_unix(sudo:session): session closed for user root
Dec 01 20:53:02 compute-0 sudo[247548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:53:02 compute-0 sudo[247548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:53:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:53:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:53:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:53:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:53:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:53:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:53:03 compute-0 sudo[247548]: pam_unix(sudo:session): session closed for user root
Dec 01 20:53:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:53:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:53:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:53:03 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:53:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:53:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/2190514921' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:53:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/2190514921' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:53:03 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:53:03 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:53:03 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:53:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:53:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:53:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:53:03 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:53:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:53:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:53:03 compute-0 sudo[247604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:53:03 compute-0 sudo[247604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:53:03 compute-0 sudo[247604]: pam_unix(sudo:session): session closed for user root
Dec 01 20:53:03 compute-0 sudo[247629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:53:03 compute-0 sudo[247629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:53:04 compute-0 podman[247667]: 2025-12-01 20:53:04.145025406 +0000 UTC m=+0.068767751 container create 1f9cbac2b1975f695f9bde9d64b89fd83ea6f52fd35ea084814192d679d8b9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bose, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 01 20:53:04 compute-0 podman[247667]: 2025-12-01 20:53:04.096859804 +0000 UTC m=+0.020602159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:53:04 compute-0 systemd[1]: Started libpod-conmon-1f9cbac2b1975f695f9bde9d64b89fd83ea6f52fd35ea084814192d679d8b9c7.scope.
Dec 01 20:53:04 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:53:04 compute-0 podman[247667]: 2025-12-01 20:53:04.414741007 +0000 UTC m=+0.338483352 container init 1f9cbac2b1975f695f9bde9d64b89fd83ea6f52fd35ea084814192d679d8b9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:53:04 compute-0 podman[247667]: 2025-12-01 20:53:04.424167589 +0000 UTC m=+0.347909934 container start 1f9cbac2b1975f695f9bde9d64b89fd83ea6f52fd35ea084814192d679d8b9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bose, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:53:04 compute-0 systemd[1]: libpod-1f9cbac2b1975f695f9bde9d64b89fd83ea6f52fd35ea084814192d679d8b9c7.scope: Deactivated successfully.
Dec 01 20:53:04 compute-0 sharp_bose[247684]: 167 167
Dec 01 20:53:04 compute-0 conmon[247684]: conmon 1f9cbac2b1975f695f9b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f9cbac2b1975f695f9bde9d64b89fd83ea6f52fd35ea084814192d679d8b9c7.scope/container/memory.events
Dec 01 20:53:04 compute-0 podman[247667]: 2025-12-01 20:53:04.481016709 +0000 UTC m=+0.404759064 container attach 1f9cbac2b1975f695f9bde9d64b89fd83ea6f52fd35ea084814192d679d8b9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bose, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:53:04 compute-0 podman[247667]: 2025-12-01 20:53:04.481734862 +0000 UTC m=+0.405477347 container died 1f9cbac2b1975f695f9bde9d64b89fd83ea6f52fd35ea084814192d679d8b9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bose, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:53:04 compute-0 ceph-mon[75880]: pgmap v704: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:53:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:53:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:53:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:53:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:53:04 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:53:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a19745bed8ad8516f38c20c1aa4ee06a7cc9bf09cd87c57624ea95ff4479be8-merged.mount: Deactivated successfully.
Dec 01 20:53:04 compute-0 podman[247667]: 2025-12-01 20:53:04.597235628 +0000 UTC m=+0.520977943 container remove 1f9cbac2b1975f695f9bde9d64b89fd83ea6f52fd35ea084814192d679d8b9c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_bose, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 01 20:53:04 compute-0 systemd[1]: libpod-conmon-1f9cbac2b1975f695f9bde9d64b89fd83ea6f52fd35ea084814192d679d8b9c7.scope: Deactivated successfully.
Dec 01 20:53:04 compute-0 podman[247708]: 2025-12-01 20:53:04.775310092 +0000 UTC m=+0.043372294 container create e2d47aec08a7dbbc9d006511faddbe573d4c0a0c416c37de633dbe8c1030c3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:53:04 compute-0 systemd[1]: Started libpod-conmon-e2d47aec08a7dbbc9d006511faddbe573d4c0a0c416c37de633dbe8c1030c3d6.scope.
Dec 01 20:53:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:04 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:53:04 compute-0 podman[247708]: 2025-12-01 20:53:04.756888231 +0000 UTC m=+0.024950453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8bdfa0e60d5be6b8d58fafcbd2fea727954587cce27d211f79129fbfed5e9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8bdfa0e60d5be6b8d58fafcbd2fea727954587cce27d211f79129fbfed5e9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8bdfa0e60d5be6b8d58fafcbd2fea727954587cce27d211f79129fbfed5e9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8bdfa0e60d5be6b8d58fafcbd2fea727954587cce27d211f79129fbfed5e9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8bdfa0e60d5be6b8d58fafcbd2fea727954587cce27d211f79129fbfed5e9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:04 compute-0 podman[247708]: 2025-12-01 20:53:04.870421047 +0000 UTC m=+0.138483269 container init e2d47aec08a7dbbc9d006511faddbe573d4c0a0c416c37de633dbe8c1030c3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:53:04 compute-0 podman[247708]: 2025-12-01 20:53:04.881837581 +0000 UTC m=+0.149899773 container start e2d47aec08a7dbbc9d006511faddbe573d4c0a0c416c37de633dbe8c1030c3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:53:04 compute-0 podman[247708]: 2025-12-01 20:53:04.885319918 +0000 UTC m=+0.153382180 container attach e2d47aec08a7dbbc9d006511faddbe573d4c0a0c416c37de633dbe8c1030c3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 01 20:53:05 compute-0 romantic_perlman[247726]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:53:05 compute-0 romantic_perlman[247726]: --> All data devices are unavailable
Dec 01 20:53:05 compute-0 systemd[1]: libpod-e2d47aec08a7dbbc9d006511faddbe573d4c0a0c416c37de633dbe8c1030c3d6.scope: Deactivated successfully.
Dec 01 20:53:05 compute-0 podman[247708]: 2025-12-01 20:53:05.446160224 +0000 UTC m=+0.714222476 container died e2d47aec08a7dbbc9d006511faddbe573d4c0a0c416c37de633dbe8c1030c3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:53:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb8bdfa0e60d5be6b8d58fafcbd2fea727954587cce27d211f79129fbfed5e9f-merged.mount: Deactivated successfully.
Dec 01 20:53:05 compute-0 podman[247708]: 2025-12-01 20:53:05.502190839 +0000 UTC m=+0.770253041 container remove e2d47aec08a7dbbc9d006511faddbe573d4c0a0c416c37de633dbe8c1030c3d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 01 20:53:05 compute-0 systemd[1]: libpod-conmon-e2d47aec08a7dbbc9d006511faddbe573d4c0a0c416c37de633dbe8c1030c3d6.scope: Deactivated successfully.
Dec 01 20:53:05 compute-0 sudo[247629]: pam_unix(sudo:session): session closed for user root
Dec 01 20:53:05 compute-0 ceph-mon[75880]: pgmap v705: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:05 compute-0 sudo[247760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:53:05 compute-0 sudo[247760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:53:05 compute-0 sudo[247760]: pam_unix(sudo:session): session closed for user root
Dec 01 20:53:05 compute-0 sudo[247785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:53:05 compute-0 sudo[247785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:53:05 compute-0 podman[247822]: 2025-12-01 20:53:05.999024364 +0000 UTC m=+0.035473730 container create 687a8e7b8d17fbce6f24b17e64e9868e6a1f70bca69e926ab9b9b071a24f2751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 20:53:06 compute-0 systemd[1]: Started libpod-conmon-687a8e7b8d17fbce6f24b17e64e9868e6a1f70bca69e926ab9b9b071a24f2751.scope.
Dec 01 20:53:06 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:53:06 compute-0 podman[247822]: 2025-12-01 20:53:06.061701514 +0000 UTC m=+0.098150910 container init 687a8e7b8d17fbce6f24b17e64e9868e6a1f70bca69e926ab9b9b071a24f2751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:53:06 compute-0 podman[247822]: 2025-12-01 20:53:06.068229947 +0000 UTC m=+0.104679313 container start 687a8e7b8d17fbce6f24b17e64e9868e6a1f70bca69e926ab9b9b071a24f2751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 01 20:53:06 compute-0 podman[247822]: 2025-12-01 20:53:06.072892461 +0000 UTC m=+0.109341847 container attach 687a8e7b8d17fbce6f24b17e64e9868e6a1f70bca69e926ab9b9b071a24f2751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_goldstine, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:53:06 compute-0 practical_goldstine[247838]: 167 167
Dec 01 20:53:06 compute-0 systemd[1]: libpod-687a8e7b8d17fbce6f24b17e64e9868e6a1f70bca69e926ab9b9b071a24f2751.scope: Deactivated successfully.
Dec 01 20:53:06 compute-0 podman[247822]: 2025-12-01 20:53:06.075958896 +0000 UTC m=+0.112408262 container died 687a8e7b8d17fbce6f24b17e64e9868e6a1f70bca69e926ab9b9b071a24f2751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:53:06 compute-0 podman[247822]: 2025-12-01 20:53:05.982684618 +0000 UTC m=+0.019134004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fface35684743e28dda1a44d808c76ddc6e5c648d46ea603bb5348c04917815-merged.mount: Deactivated successfully.
Dec 01 20:53:06 compute-0 podman[247822]: 2025-12-01 20:53:06.119452043 +0000 UTC m=+0.155901409 container remove 687a8e7b8d17fbce6f24b17e64e9868e6a1f70bca69e926ab9b9b071a24f2751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:53:06 compute-0 systemd[1]: libpod-conmon-687a8e7b8d17fbce6f24b17e64e9868e6a1f70bca69e926ab9b9b071a24f2751.scope: Deactivated successfully.
Dec 01 20:53:06 compute-0 podman[247861]: 2025-12-01 20:53:06.280885011 +0000 UTC m=+0.037518063 container create 2b91d403bdff4709cab64b330e48053b4a2223bdc1aa69857f1767ef8bb18aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:53:06 compute-0 systemd[1]: Started libpod-conmon-2b91d403bdff4709cab64b330e48053b4a2223bdc1aa69857f1767ef8bb18aa4.scope.
Dec 01 20:53:06 compute-0 podman[247861]: 2025-12-01 20:53:06.263572765 +0000 UTC m=+0.020205847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:53:06 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2775c9df9705a108540a51d67c94e9107dc282e4d4183b84b7549780701cefd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2775c9df9705a108540a51d67c94e9107dc282e4d4183b84b7549780701cefd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2775c9df9705a108540a51d67c94e9107dc282e4d4183b84b7549780701cefd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2775c9df9705a108540a51d67c94e9107dc282e4d4183b84b7549780701cefd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:06 compute-0 podman[247861]: 2025-12-01 20:53:06.387821803 +0000 UTC m=+0.144454905 container init 2b91d403bdff4709cab64b330e48053b4a2223bdc1aa69857f1767ef8bb18aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:53:06 compute-0 podman[247861]: 2025-12-01 20:53:06.39549538 +0000 UTC m=+0.152128472 container start 2b91d403bdff4709cab64b330e48053b4a2223bdc1aa69857f1767ef8bb18aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:53:06 compute-0 podman[247861]: 2025-12-01 20:53:06.399322449 +0000 UTC m=+0.155955521 container attach 2b91d403bdff4709cab64b330e48053b4a2223bdc1aa69857f1767ef8bb18aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]: {
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:     "0": [
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:         {
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "devices": [
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "/dev/loop3"
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             ],
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_name": "ceph_lv0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_size": "21470642176",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "name": "ceph_lv0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "tags": {
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.cluster_name": "ceph",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.crush_device_class": "",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.encrypted": "0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.objectstore": "bluestore",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.osd_id": "0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.type": "block",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.vdo": "0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.with_tpm": "0"
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             },
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "type": "block",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "vg_name": "ceph_vg0"
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:         }
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:     ],
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:     "1": [
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:         {
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "devices": [
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "/dev/loop4"
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             ],
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_name": "ceph_lv1",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_size": "21470642176",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "name": "ceph_lv1",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "tags": {
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.cluster_name": "ceph",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.crush_device_class": "",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.encrypted": "0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.objectstore": "bluestore",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.osd_id": "1",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.type": "block",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.vdo": "0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.with_tpm": "0"
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             },
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "type": "block",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "vg_name": "ceph_vg1"
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:         }
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:     ],
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:     "2": [
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:         {
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "devices": [
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "/dev/loop5"
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             ],
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_name": "ceph_lv2",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_size": "21470642176",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "name": "ceph_lv2",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "tags": {
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.cluster_name": "ceph",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.crush_device_class": "",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.encrypted": "0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.objectstore": "bluestore",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.osd_id": "2",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.type": "block",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.vdo": "0",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:                 "ceph.with_tpm": "0"
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             },
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "type": "block",
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:             "vg_name": "ceph_vg2"
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:         }
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]:     ]
Dec 01 20:53:06 compute-0 optimistic_chatterjee[247877]: }
Dec 01 20:53:06 compute-0 systemd[1]: libpod-2b91d403bdff4709cab64b330e48053b4a2223bdc1aa69857f1767ef8bb18aa4.scope: Deactivated successfully.
Dec 01 20:53:06 compute-0 podman[247861]: 2025-12-01 20:53:06.740551665 +0000 UTC m=+0.497184727 container died 2b91d403bdff4709cab64b330e48053b4a2223bdc1aa69857f1767ef8bb18aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 20:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2775c9df9705a108540a51d67c94e9107dc282e4d4183b84b7549780701cefd-merged.mount: Deactivated successfully.
Dec 01 20:53:06 compute-0 podman[247861]: 2025-12-01 20:53:06.786213378 +0000 UTC m=+0.542846430 container remove 2b91d403bdff4709cab64b330e48053b4a2223bdc1aa69857f1767ef8bb18aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 01 20:53:06 compute-0 systemd[1]: libpod-conmon-2b91d403bdff4709cab64b330e48053b4a2223bdc1aa69857f1767ef8bb18aa4.scope: Deactivated successfully.
Dec 01 20:53:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:06 compute-0 sudo[247785]: pam_unix(sudo:session): session closed for user root
Dec 01 20:53:06 compute-0 sudo[247898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:53:06 compute-0 sudo[247898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:53:06 compute-0 sudo[247898]: pam_unix(sudo:session): session closed for user root
Dec 01 20:53:07 compute-0 sudo[247923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:53:07 compute-0 sudo[247923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:53:07 compute-0 podman[247961]: 2025-12-01 20:53:07.410647643 +0000 UTC m=+0.066879421 container create 7a47c553f2fc81478b18dff6fa14a83188402941f879cd5806b3d62647f736b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_euler, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 01 20:53:07 compute-0 systemd[1]: Started libpod-conmon-7a47c553f2fc81478b18dff6fa14a83188402941f879cd5806b3d62647f736b1.scope.
Dec 01 20:53:07 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:53:07.469 155855 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=84a1d907-d341-4608-b17a-1f738619ea16, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 01 20:53:07 compute-0 podman[247961]: 2025-12-01 20:53:07.39081673 +0000 UTC m=+0.047048488 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:53:07 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:53:07 compute-0 podman[247961]: 2025-12-01 20:53:07.529016999 +0000 UTC m=+0.185248737 container init 7a47c553f2fc81478b18dff6fa14a83188402941f879cd5806b3d62647f736b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 01 20:53:07 compute-0 podman[247961]: 2025-12-01 20:53:07.539458312 +0000 UTC m=+0.195690050 container start 7a47c553f2fc81478b18dff6fa14a83188402941f879cd5806b3d62647f736b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:53:07 compute-0 podman[247961]: 2025-12-01 20:53:07.543314622 +0000 UTC m=+0.199546380 container attach 7a47c553f2fc81478b18dff6fa14a83188402941f879cd5806b3d62647f736b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 01 20:53:07 compute-0 relaxed_euler[247978]: 167 167
Dec 01 20:53:07 compute-0 systemd[1]: libpod-7a47c553f2fc81478b18dff6fa14a83188402941f879cd5806b3d62647f736b1.scope: Deactivated successfully.
Dec 01 20:53:07 compute-0 podman[247961]: 2025-12-01 20:53:07.549631057 +0000 UTC m=+0.205862825 container died 7a47c553f2fc81478b18dff6fa14a83188402941f879cd5806b3d62647f736b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 01 20:53:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-52cc7e0d5105b5dcc0af7fd8c12ee9828d4a2f213aa78a7c373b31ad75ea21cc-merged.mount: Deactivated successfully.
Dec 01 20:53:07 compute-0 podman[247961]: 2025-12-01 20:53:07.596785127 +0000 UTC m=+0.253016875 container remove 7a47c553f2fc81478b18dff6fa14a83188402941f879cd5806b3d62647f736b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 01 20:53:07 compute-0 systemd[1]: libpod-conmon-7a47c553f2fc81478b18dff6fa14a83188402941f879cd5806b3d62647f736b1.scope: Deactivated successfully.
Dec 01 20:53:07 compute-0 podman[248001]: 2025-12-01 20:53:07.822327331 +0000 UTC m=+0.068029367 container create 76e24930ba14fb8618498a989d309da2ba4cd880668aa9c77181c295cd0070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:53:07 compute-0 systemd[1]: Started libpod-conmon-76e24930ba14fb8618498a989d309da2ba4cd880668aa9c77181c295cd0070b3.scope.
Dec 01 20:53:07 compute-0 podman[248001]: 2025-12-01 20:53:07.794294603 +0000 UTC m=+0.039996679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:53:07 compute-0 ceph-mon[75880]: pgmap v706: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:07 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/720f76c8540116f310e5820bb4230ada9d7e9121268f58376a34b600f27c5fd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/720f76c8540116f310e5820bb4230ada9d7e9121268f58376a34b600f27c5fd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/720f76c8540116f310e5820bb4230ada9d7e9121268f58376a34b600f27c5fd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/720f76c8540116f310e5820bb4230ada9d7e9121268f58376a34b600f27c5fd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:53:07 compute-0 podman[248001]: 2025-12-01 20:53:07.933990689 +0000 UTC m=+0.179692765 container init 76e24930ba14fb8618498a989d309da2ba4cd880668aa9c77181c295cd0070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:53:07 compute-0 podman[248001]: 2025-12-01 20:53:07.945524496 +0000 UTC m=+0.191226522 container start 76e24930ba14fb8618498a989d309da2ba4cd880668aa9c77181c295cd0070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_elgamal, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 01 20:53:07 compute-0 podman[248001]: 2025-12-01 20:53:07.94953259 +0000 UTC m=+0.195234666 container attach 76e24930ba14fb8618498a989d309da2ba4cd880668aa9c77181c295cd0070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_elgamal, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:53:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:08 compute-0 lvm[248098]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:53:08 compute-0 lvm[248099]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:53:08 compute-0 lvm[248099]: VG ceph_vg2 finished
Dec 01 20:53:08 compute-0 lvm[248098]: VG ceph_vg1 finished
Dec 01 20:53:08 compute-0 lvm[248095]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:53:08 compute-0 lvm[248095]: VG ceph_vg0 finished
Dec 01 20:53:08 compute-0 priceless_elgamal[248018]: {}
Dec 01 20:53:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:08 compute-0 systemd[1]: libpod-76e24930ba14fb8618498a989d309da2ba4cd880668aa9c77181c295cd0070b3.scope: Deactivated successfully.
Dec 01 20:53:08 compute-0 podman[248001]: 2025-12-01 20:53:08.853502191 +0000 UTC m=+1.099204197 container died 76e24930ba14fb8618498a989d309da2ba4cd880668aa9c77181c295cd0070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:53:08 compute-0 systemd[1]: libpod-76e24930ba14fb8618498a989d309da2ba4cd880668aa9c77181c295cd0070b3.scope: Consumed 1.435s CPU time.
Dec 01 20:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-720f76c8540116f310e5820bb4230ada9d7e9121268f58376a34b600f27c5fd3-merged.mount: Deactivated successfully.
Dec 01 20:53:08 compute-0 podman[248001]: 2025-12-01 20:53:08.904300214 +0000 UTC m=+1.150002210 container remove 76e24930ba14fb8618498a989d309da2ba4cd880668aa9c77181c295cd0070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_elgamal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 01 20:53:08 compute-0 systemd[1]: libpod-conmon-76e24930ba14fb8618498a989d309da2ba4cd880668aa9c77181c295cd0070b3.scope: Deactivated successfully.
Dec 01 20:53:08 compute-0 sudo[247923]: pam_unix(sudo:session): session closed for user root
Dec 01 20:53:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:53:08 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:53:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:53:08 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:53:09 compute-0 sudo[248112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:53:09 compute-0 sudo[248112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:53:09 compute-0 sudo[248112]: pam_unix(sudo:session): session closed for user root
Dec 01 20:53:10 compute-0 ceph-mon[75880]: pgmap v707: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:10 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:53:10 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:53:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:11 compute-0 ceph-mon[75880]: pgmap v708: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:13 compute-0 ceph-mon[75880]: pgmap v709: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:15 compute-0 ceph-mon[75880]: pgmap v710: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 01 20:53:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 01 20:53:16 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 01 20:53:17 compute-0 ceph-mon[75880]: pgmap v711: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:53:17 compute-0 ceph-mon[75880]: osdmap e69: 3 total, 3 up, 3 in
Dec 01 20:53:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 01 20:53:19 compute-0 ceph-mon[75880]: pgmap v713: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 01 20:53:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 01 20:53:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 01 20:53:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 01 20:53:20 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 01 20:53:21 compute-0 ceph-mon[75880]: pgmap v714: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 01 20:53:21 compute-0 ceph-mon[75880]: osdmap e70: 3 total, 3 up, 3 in
Dec 01 20:53:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 01 20:53:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 01 20:53:21 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 01 20:53:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 172 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 9.3 MiB/s wr, 90 op/s
Dec 01 20:53:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 01 20:53:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 01 20:53:22 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 01 20:53:22 compute-0 ceph-mon[75880]: osdmap e71: 3 total, 3 up, 3 in
Dec 01 20:53:23 compute-0 podman[248139]: 2025-12-01 20:53:23.129175909 +0000 UTC m=+0.078263694 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 01 20:53:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:23 compute-0 ceph-mon[75880]: pgmap v717: 177 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 172 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 9.3 MiB/s wr, 90 op/s
Dec 01 20:53:23 compute-0 ceph-mon[75880]: osdmap e72: 3 total, 3 up, 3 in
Dec 01 20:53:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 168 active+clean; 41 MiB data, 186 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 19 MiB/s wr, 147 op/s
Dec 01 20:53:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 01 20:53:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 01 20:53:24 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 01 20:53:25 compute-0 podman[248160]: 2025-12-01 20:53:25.120323144 +0000 UTC m=+0.080660709 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 01 20:53:25 compute-0 podman[248161]: 2025-12-01 20:53:25.125293748 +0000 UTC m=+0.082899978 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 01 20:53:25 compute-0 ceph-mon[75880]: pgmap v719: 177 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 168 active+clean; 41 MiB data, 186 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 19 MiB/s wr, 147 op/s
Dec 01 20:53:25 compute-0 ceph-mon[75880]: osdmap e73: 3 total, 3 up, 3 in
Dec 01 20:53:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 01 20:53:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 01 20:53:26 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 01 20:53:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 168 active+clean; 41 MiB data, 186 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 23 MiB/s wr, 181 op/s
Dec 01 20:53:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 01 20:53:27 compute-0 ceph-mon[75880]: osdmap e74: 3 total, 3 up, 3 in
Dec 01 20:53:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 01 20:53:27 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 01 20:53:28 compute-0 ceph-mon[75880]: pgmap v722: 177 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 168 active+clean; 41 MiB data, 186 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 23 MiB/s wr, 181 op/s
Dec 01 20:53:28 compute-0 ceph-mon[75880]: osdmap e75: 3 total, 3 up, 3 in
Dec 01 20:53:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 9.6 MiB/s wr, 279 op/s
Dec 01 20:53:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 01 20:53:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 01 20:53:29 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 01 20:53:30 compute-0 ceph-mon[75880]: pgmap v724: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 9.6 MiB/s wr, 279 op/s
Dec 01 20:53:30 compute-0 ceph-mon[75880]: osdmap e76: 3 total, 3 up, 3 in
Dec 01 20:53:30 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 01 20:53:30 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 01 20:53:30 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 01 20:53:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 14 KiB/s wr, 236 op/s
Dec 01 20:53:31 compute-0 ceph-mon[75880]: osdmap e77: 3 total, 3 up, 3 in
Dec 01 20:53:32 compute-0 ceph-mon[75880]: pgmap v727: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 14 KiB/s wr, 236 op/s
Dec 01 20:53:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:53:32
Dec 01 20:53:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:53:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:53:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'volumes', 'vms']
Dec 01 20:53:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:53:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 13 KiB/s wr, 205 op/s
Dec 01 20:53:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 01 20:53:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 01 20:53:33 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:53:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:53:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 01 20:53:34 compute-0 ceph-mon[75880]: pgmap v728: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 13 KiB/s wr, 205 op/s
Dec 01 20:53:34 compute-0 ceph-mon[75880]: osdmap e78: 3 total, 3 up, 3 in
Dec 01 20:53:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 01 20:53:34 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 01 20:53:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.6 KiB/s wr, 56 op/s
Dec 01 20:53:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 01 20:53:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 01 20:53:35 compute-0 ceph-mon[75880]: osdmap e79: 3 total, 3 up, 3 in
Dec 01 20:53:35 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 01 20:53:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 01 20:53:36 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 01 20:53:36 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 01 20:53:36 compute-0 ceph-mon[75880]: pgmap v731: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.6 KiB/s wr, 56 op/s
Dec 01 20:53:36 compute-0 ceph-mon[75880]: osdmap e80: 3 total, 3 up, 3 in
Dec 01 20:53:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.7 KiB/s wr, 59 op/s
Dec 01 20:53:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 01 20:53:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 01 20:53:37 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 01 20:53:37 compute-0 ceph-mon[75880]: osdmap e81: 3 total, 3 up, 3 in
Dec 01 20:53:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 01 20:53:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 01 20:53:38 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 01 20:53:38 compute-0 ceph-mon[75880]: pgmap v734: 177 pgs: 177 active+clean; 41 MiB data, 123 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.7 KiB/s wr, 59 op/s
Dec 01 20:53:38 compute-0 ceph-mon[75880]: osdmap e82: 3 total, 3 up, 3 in
Dec 01 20:53:38 compute-0 ceph-mon[75880]: osdmap e83: 3 total, 3 up, 3 in
Dec 01 20:53:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 144 KiB/s rd, 19 KiB/s wr, 199 op/s
Dec 01 20:53:39 compute-0 nova_compute[244568]: 2025-12-01 20:53:39.022 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "96a11cae-1191-4376-947d-1d6c1d5e3e6d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:53:39 compute-0 nova_compute[244568]: 2025-12-01 20:53:39.023 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "96a11cae-1191-4376-947d-1d6c1d5e3e6d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:53:39 compute-0 nova_compute[244568]: 2025-12-01 20:53:39.085 244572 DEBUG nova.compute.manager [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 01 20:53:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 01 20:53:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 01 20:53:39 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 01 20:53:39 compute-0 nova_compute[244568]: 2025-12-01 20:53:39.435 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:53:39 compute-0 nova_compute[244568]: 2025-12-01 20:53:39.436 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:53:39 compute-0 nova_compute[244568]: 2025-12-01 20:53:39.443 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 01 20:53:39 compute-0 nova_compute[244568]: 2025-12-01 20:53:39.443 244572 INFO nova.compute.claims [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Claim successful on node compute-0.ctlplane.example.com
Dec 01 20:53:39 compute-0 nova_compute[244568]: 2025-12-01 20:53:39.605 244572 DEBUG oslo_concurrency.processutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:53:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:53:40 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268516188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:53:40 compute-0 nova_compute[244568]: 2025-12-01 20:53:40.173 244572 DEBUG oslo_concurrency.processutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:53:40 compute-0 nova_compute[244568]: 2025-12-01 20:53:40.179 244572 DEBUG nova.compute.provider_tree [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:53:40 compute-0 nova_compute[244568]: 2025-12-01 20:53:40.196 244572 DEBUG nova.scheduler.client.report [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:53:40 compute-0 nova_compute[244568]: 2025-12-01 20:53:40.220 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:53:40 compute-0 nova_compute[244568]: 2025-12-01 20:53:40.221 244572 DEBUG nova.compute.manager [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 01 20:53:40 compute-0 ceph-mon[75880]: pgmap v737: 177 pgs: 177 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 144 KiB/s rd, 19 KiB/s wr, 199 op/s
Dec 01 20:53:40 compute-0 ceph-mon[75880]: osdmap e84: 3 total, 3 up, 3 in
Dec 01 20:53:40 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1268516188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.2856015638036095e-07 of space, bias 1.0, pg target 6.856804691410828e-05 quantized to 32 (current 32)
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006678377020954426 of space, bias 1.0, pg target 0.20035131062863276 quantized to 32 (current 32)
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5524609399243957e-06 of space, bias 4.0, pg target 0.0018629531279092748 quantized to 16 (current 16)
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:53:40 compute-0 nova_compute[244568]: 2025-12-01 20:53:40.485 244572 DEBUG nova.compute.manager [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 01 20:53:40 compute-0 nova_compute[244568]: 2025-12-01 20:53:40.485 244572 DEBUG nova.network.neutron [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 01 20:53:40 compute-0 nova_compute[244568]: 2025-12-01 20:53:40.577 244572 INFO nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 01 20:53:40 compute-0 nova_compute[244568]: 2025-12-01 20:53:40.614 244572 DEBUG nova.compute.manager [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 01 20:53:40 compute-0 nova_compute[244568]: 2025-12-01 20:53:40.675 244572 INFO nova.virt.block_device [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Booting with volume cbad2b83-e395-4688-ac99-3302fd0459e1 at /dev/vda
Dec 01 20:53:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 16 KiB/s wr, 173 op/s
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.114 244572 DEBUG os_brick.utils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.115 244572 INFO oslo.privsep.daemon [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpbykm3yth/privsep.sock']
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.150 244572 DEBUG nova.network.neutron [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.151 244572 DEBUG nova.compute.manager [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 01 20:53:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 01 20:53:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 01 20:53:41 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 01 20:53:41 compute-0 ceph-mon[75880]: pgmap v739: 177 pgs: 177 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 16 KiB/s wr, 173 op/s
Dec 01 20:53:41 compute-0 ceph-mon[75880]: osdmap e85: 3 total, 3 up, 3 in
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.837 244572 INFO oslo.privsep.daemon [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Spawned new privsep daemon via rootwrap
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.733 248231 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.736 248231 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.738 248231 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.739 248231 INFO oslo.privsep.daemon [-] privsep daemon running as pid 248231
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.840 248231 DEBUG oslo.privsep.daemon [-] privsep: reply[e503b496-a288-47ff-b799-0b01f1879b99]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.959 248231 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.969 248231 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.970 248231 DEBUG oslo.privsep.daemon [-] privsep: reply[c3098af7-ceed-4134-a7d1-8a549ed27349]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.971 248231 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.979 248231 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.980 248231 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea75a59-8630-4b76-a02e-063521c2e310]: (4, ('InitiatorName=iqn.1994-05.com.redhat:35ddbeb8ff9f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.983 248231 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.994 248231 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.994 248231 DEBUG oslo.privsep.daemon [-] privsep: reply[02950f5a-89b2-44dc-ad16-a896d2f80672]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.997 248231 DEBUG oslo.privsep.daemon [-] privsep: reply[8da81c9e-233d-4fcd-bd49-cd0a78907f8c]: (4, '6d7269d0-ae07-4538-adba-52753671c0ef') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 01 20:53:41 compute-0 nova_compute[244568]: 2025-12-01 20:53:41.998 244572 DEBUG oslo_concurrency.processutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:53:42 compute-0 nova_compute[244568]: 2025-12-01 20:53:42.025 244572 DEBUG oslo_concurrency.processutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:53:42 compute-0 nova_compute[244568]: 2025-12-01 20:53:42.031 244572 DEBUG os_brick.initiator.connectors.lightos [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 01 20:53:42 compute-0 nova_compute[244568]: 2025-12-01 20:53:42.032 244572 DEBUG os_brick.initiator.connectors.lightos [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 01 20:53:42 compute-0 nova_compute[244568]: 2025-12-01 20:53:42.032 244572 DEBUG os_brick.initiator.connectors.lightos [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 01 20:53:42 compute-0 nova_compute[244568]: 2025-12-01 20:53:42.033 244572 DEBUG os_brick.utils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] <== get_connector_properties: return (918ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:35ddbeb8ff9f', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '6d7269d0-ae07-4538-adba-52753671c0ef', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 01 20:53:42 compute-0 nova_compute[244568]: 2025-12-01 20:53:42.033 244572 DEBUG nova.virt.block_device [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Updating existing volume attachment record: 87a15be1-1574-4984-80c5-2dbbaebb9595 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 01 20:53:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 14 KiB/s wr, 137 op/s
Dec 01 20:53:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 01 20:53:42 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/925343971' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 01 20:53:42 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/925343971' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 01 20:53:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 01 20:53:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 01 20:53:43 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.391 244572 DEBUG nova.compute.manager [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.393 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.393 244572 INFO nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Creating image(s)
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.394 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.394 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Ensure instance console log exists: /var/lib/nova/instances/96a11cae-1191-4376-947d-1d6c1d5e3e6d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.394 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.395 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.395 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.397 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '87a15be1-1574-4984-80c5-2dbbaebb9595', 'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'delete_on_termination': True, 'mount_device': '/dev/vda', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-cbad2b83-e395-4688-ac99-3302fd0459e1', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'cbad2b83-e395-4688-ac99-3302fd0459e1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '96a11cae-1191-4376-947d-1d6c1d5e3e6d', 'attached_at': '', 'detached_at': '', 'volume_id': 'cbad2b83-e395-4688-ac99-3302fd0459e1', 'serial': 'cbad2b83-e395-4688-ac99-3302fd0459e1'}, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.403 244572 WARNING nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.407 244572 DEBUG nova.virt.libvirt.host [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.408 244572 DEBUG nova.virt.libvirt.host [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.410 244572 DEBUG nova.virt.libvirt.host [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.411 244572 DEBUG nova.virt.libvirt.host [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.411 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.411 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:52:33Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed7407e-096f-4e58-b15f-394693b4d91f',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.412 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.412 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.412 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.413 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.413 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.413 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.414 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.414 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.414 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.415 244572 DEBUG nova.virt.hardware [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.443 244572 DEBUG nova.storage.rbd_utils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] rbd image 96a11cae-1191-4376-947d-1d6c1d5e3e6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.447 244572 DEBUG nova.privsep.utils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 01 20:53:43 compute-0 nova_compute[244568]: 2025-12-01 20:53:43.448 244572 DEBUG oslo_concurrency.processutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:53:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 01 20:53:43 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449247315' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.002 244572 DEBUG oslo_concurrency.processutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.004 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.005 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.006 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:53:44 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 01 20:53:44 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.078 244572 DEBUG nova.objects.instance [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lazy-loading 'pci_devices' on Instance uuid 96a11cae-1191-4376-947d-1d6c1d5e3e6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.093 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] End _get_guest_xml xml=<domain type="kvm">
Dec 01 20:53:44 compute-0 nova_compute[244568]:   <uuid>96a11cae-1191-4376-947d-1d6c1d5e3e6d</uuid>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   <name>instance-00000001</name>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   <memory>131072</memory>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   <vcpu>1</vcpu>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   <metadata>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <nova:name>instance-depend-image</nova:name>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <nova:creationTime>2025-12-01 20:53:43</nova:creationTime>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <nova:flavor name="m1.nano">
Dec 01 20:53:44 compute-0 nova_compute[244568]:         <nova:memory>128</nova:memory>
Dec 01 20:53:44 compute-0 nova_compute[244568]:         <nova:disk>1</nova:disk>
Dec 01 20:53:44 compute-0 nova_compute[244568]:         <nova:swap>0</nova:swap>
Dec 01 20:53:44 compute-0 nova_compute[244568]:         <nova:ephemeral>0</nova:ephemeral>
Dec 01 20:53:44 compute-0 nova_compute[244568]:         <nova:vcpus>1</nova:vcpus>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       </nova:flavor>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <nova:owner>
Dec 01 20:53:44 compute-0 nova_compute[244568]:         <nova:user uuid="25f1ff3f1de64c54878ee8235abb222e">tempest-ImageDependencyTests-145084690-project-member</nova:user>
Dec 01 20:53:44 compute-0 nova_compute[244568]:         <nova:project uuid="34d478e941f64c469a8c9150557e448e">tempest-ImageDependencyTests-145084690</nova:project>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       </nova:owner>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <nova:ports/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     </nova:instance>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   </metadata>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   <sysinfo type="smbios">
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <system>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <entry name="manufacturer">RDO</entry>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <entry name="product">OpenStack Compute</entry>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <entry name="serial">96a11cae-1191-4376-947d-1d6c1d5e3e6d</entry>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <entry name="uuid">96a11cae-1191-4376-947d-1d6c1d5e3e6d</entry>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <entry name="family">Virtual Machine</entry>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     </system>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   </sysinfo>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   <os>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <boot dev="hd"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <smbios mode="sysinfo"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   </os>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   <features>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <acpi/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <apic/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <vmcoreinfo/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   </features>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   <clock offset="utc">
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <timer name="pit" tickpolicy="delay"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <timer name="hpet" present="no"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   </clock>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   <cpu mode="host-model" match="exact">
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <topology sockets="1" cores="1" threads="1"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   </cpu>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   <devices>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <disk type="network" device="cdrom">
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <driver type="raw" cache="none"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <source protocol="rbd" name="vms/96a11cae-1191-4376-947d-1d6c1d5e3e6d_disk.config">
Dec 01 20:53:44 compute-0 nova_compute[244568]:         <host name="192.168.122.100" port="6789"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       </source>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <auth username="openstack">
Dec 01 20:53:44 compute-0 nova_compute[244568]:         <secret type="ceph" uuid="dcf60a89-bba0-58b0-a1bf-d4bde723201b"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       </auth>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <target dev="sda" bus="sata"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     </disk>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <disk type="network" device="disk">
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <source protocol="rbd" name="volumes/volume-cbad2b83-e395-4688-ac99-3302fd0459e1">
Dec 01 20:53:44 compute-0 nova_compute[244568]:         <host name="192.168.122.100" port="6789"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       </source>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <auth username="openstack">
Dec 01 20:53:44 compute-0 nova_compute[244568]:         <secret type="ceph" uuid="dcf60a89-bba0-58b0-a1bf-d4bde723201b"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       </auth>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <target dev="vda" bus="virtio"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <serial>cbad2b83-e395-4688-ac99-3302fd0459e1</serial>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     </disk>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <serial type="pty">
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <log file="/var/lib/nova/instances/96a11cae-1191-4376-947d-1d6c1d5e3e6d/console.log" append="off"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     </serial>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <video>
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <model type="virtio"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     </video>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <input type="tablet" bus="usb"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <rng model="virtio">
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <backend model="random">/dev/urandom</backend>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     </rng>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <controller type="usb" index="0"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     <memballoon model="virtio">
Dec 01 20:53:44 compute-0 nova_compute[244568]:       <stats period="10"/>
Dec 01 20:53:44 compute-0 nova_compute[244568]:     </memballoon>
Dec 01 20:53:44 compute-0 nova_compute[244568]:   </devices>
Dec 01 20:53:44 compute-0 nova_compute[244568]: </domain>
Dec 01 20:53:44 compute-0 nova_compute[244568]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.145 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.145 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.145 244572 INFO nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Using config drive
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.162 244572 DEBUG nova.storage.rbd_utils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] rbd image 96a11cae-1191-4376-947d-1d6c1d5e3e6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 01 20:53:44 compute-0 ceph-mon[75880]: pgmap v741: 177 pgs: 177 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 14 KiB/s wr, 137 op/s
Dec 01 20:53:44 compute-0 ceph-mon[75880]: osdmap e86: 3 total, 3 up, 3 in
Dec 01 20:53:44 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/449247315' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 01 20:53:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:53:44.356 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:53:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:53:44.356 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:53:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:53:44.356 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:53:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 41 MiB data, 141 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.8 KiB/s wr, 58 op/s
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.945 244572 INFO nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Creating config drive at /var/lib/nova/instances/96a11cae-1191-4376-947d-1d6c1d5e3e6d/disk.config
Dec 01 20:53:44 compute-0 nova_compute[244568]: 2025-12-01 20:53:44.950 244572 DEBUG oslo_concurrency.processutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96a11cae-1191-4376-947d-1d6c1d5e3e6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9342a5_g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:53:45 compute-0 nova_compute[244568]: 2025-12-01 20:53:45.077 244572 DEBUG oslo_concurrency.processutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96a11cae-1191-4376-947d-1d6c1d5e3e6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9342a5_g" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:53:45 compute-0 nova_compute[244568]: 2025-12-01 20:53:45.100 244572 DEBUG nova.storage.rbd_utils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] rbd image 96a11cae-1191-4376-947d-1d6c1d5e3e6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 01 20:53:45 compute-0 nova_compute[244568]: 2025-12-01 20:53:45.104 244572 DEBUG oslo_concurrency.processutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96a11cae-1191-4376-947d-1d6c1d5e3e6d/disk.config 96a11cae-1191-4376-947d-1d6c1d5e3e6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:53:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 01 20:53:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 01 20:53:45 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 01 20:53:46 compute-0 ceph-mon[75880]: pgmap v743: 177 pgs: 177 active+clean; 41 MiB data, 141 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.8 KiB/s wr, 58 op/s
Dec 01 20:53:46 compute-0 ceph-mon[75880]: osdmap e87: 3 total, 3 up, 3 in
Dec 01 20:53:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 01 20:53:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 01 20:53:46 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.331 244572 DEBUG oslo_concurrency.processutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96a11cae-1191-4376-947d-1d6c1d5e3e6d/disk.config 96a11cae-1191-4376-947d-1d6c1d5e3e6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.331 244572 INFO nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Deleting local config drive /var/lib/nova/instances/96a11cae-1191-4376-947d-1d6c1d5e3e6d/disk.config because it was imported into RBD.
Dec 01 20:53:46 compute-0 systemd-machined[207098]: New machine qemu-1-instance-00000001.
Dec 01 20:53:46 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.846 244572 DEBUG nova.compute.manager [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.846 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.847 244572 DEBUG nova.virt.driver [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Emitting event <LifecycleEvent: 1764622426.8474405, 96a11cae-1191-4376-947d-1d6c1d5e3e6d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.848 244572 INFO nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] VM Resumed (Lifecycle Event)
Dec 01 20:53:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 41 MiB data, 141 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 8.6 KiB/s wr, 64 op/s
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.852 244572 INFO nova.virt.libvirt.driver [-] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Instance spawned successfully.
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.853 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.884 244572 DEBUG nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.887 244572 DEBUG nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.907 244572 INFO nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.908 244572 DEBUG nova.virt.driver [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Emitting event <LifecycleEvent: 1764622426.8475308, 96a11cae-1191-4376-947d-1d6c1d5e3e6d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.908 244572 INFO nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] VM Started (Lifecycle Event)
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.912 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.913 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.913 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.914 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.914 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.915 244572 DEBUG nova.virt.libvirt.driver [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.925 244572 DEBUG nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.928 244572 DEBUG nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.947 244572 INFO nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.968 244572 INFO nova.compute.manager [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Took 3.58 seconds to spawn the instance on the hypervisor.
Dec 01 20:53:46 compute-0 nova_compute[244568]: 2025-12-01 20:53:46.968 244572 DEBUG nova.compute.manager [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 01 20:53:47 compute-0 nova_compute[244568]: 2025-12-01 20:53:47.022 244572 INFO nova.compute.manager [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Took 7.63 seconds to build instance.
Dec 01 20:53:47 compute-0 nova_compute[244568]: 2025-12-01 20:53:47.038 244572 DEBUG oslo_concurrency.lockutils [None req-d046ec6e-00dc-42d8-a5ee-103bdbd93322 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "96a11cae-1191-4376-947d-1d6c1d5e3e6d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:53:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 01 20:53:47 compute-0 ceph-mon[75880]: osdmap e88: 3 total, 3 up, 3 in
Dec 01 20:53:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 01 20:53:47 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 01 20:53:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:48 compute-0 ceph-mon[75880]: pgmap v746: 177 pgs: 177 active+clean; 41 MiB data, 141 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 8.6 KiB/s wr, 64 op/s
Dec 01 20:53:48 compute-0 ceph-mon[75880]: osdmap e89: 3 total, 3 up, 3 in
Dec 01 20:53:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 34 KiB/s wr, 166 op/s
Dec 01 20:53:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 01 20:53:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 01 20:53:49 compute-0 ceph-mon[75880]: pgmap v748: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 34 KiB/s wr, 166 op/s
Dec 01 20:53:49 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 01 20:53:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 01 20:53:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 01 20:53:50 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 01 20:53:50 compute-0 ceph-mon[75880]: osdmap e90: 3 total, 3 up, 3 in
Dec 01 20:53:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 42 KiB/s wr, 201 op/s
Dec 01 20:53:51 compute-0 ceph-mon[75880]: osdmap e91: 3 total, 3 up, 3 in
Dec 01 20:53:51 compute-0 ceph-mon[75880]: pgmap v751: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 42 KiB/s wr, 201 op/s
Dec 01 20:53:51 compute-0 nova_compute[244568]: 2025-12-01 20:53:51.958 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:53:51 compute-0 nova_compute[244568]: 2025-12-01 20:53:51.959 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:53:51 compute-0 nova_compute[244568]: 2025-12-01 20:53:51.959 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 01 20:53:51 compute-0 nova_compute[244568]: 2025-12-01 20:53:51.990 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 01 20:53:51 compute-0 nova_compute[244568]: 2025-12-01 20:53:51.990 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:53:51 compute-0 nova_compute[244568]: 2025-12-01 20:53:51.991 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 01 20:53:52 compute-0 nova_compute[244568]: 2025-12-01 20:53:52.038 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:53:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 01 20:53:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v752: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 163 KiB/s rd, 35 KiB/s wr, 219 op/s
Dec 01 20:53:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 01 20:53:52 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 01 20:53:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 01 20:53:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 01 20:53:53 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 01 20:53:54 compute-0 ceph-mon[75880]: pgmap v752: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 163 KiB/s rd, 35 KiB/s wr, 219 op/s
Dec 01 20:53:54 compute-0 ceph-mon[75880]: osdmap e92: 3 total, 3 up, 3 in
Dec 01 20:53:54 compute-0 ceph-mon[75880]: osdmap e93: 3 total, 3 up, 3 in
Dec 01 20:53:54 compute-0 podman[248415]: 2025-12-01 20:53:54.116551124 +0000 UTC m=+0.071772353 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 01 20:53:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 170 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 9.3 KiB/s wr, 146 op/s
Dec 01 20:53:55 compute-0 nova_compute[244568]: 2025-12-01 20:53:55.049 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:53:55 compute-0 nova_compute[244568]: 2025-12-01 20:53:55.049 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:53:55 compute-0 nova_compute[244568]: 2025-12-01 20:53:55.049 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 20:53:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 01 20:53:56 compute-0 podman[248439]: 2025-12-01 20:53:56.10771176 +0000 UTC m=+0.063262400 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 01 20:53:56 compute-0 ceph-mon[75880]: pgmap v755: 177 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 170 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 9.3 KiB/s wr, 146 op/s
Dec 01 20:53:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 01 20:53:56 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 01 20:53:56 compute-0 podman[248440]: 2025-12-01 20:53:56.156136949 +0000 UTC m=+0.106494878 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:53:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v757: 177 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 170 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 8.2 KiB/s wr, 128 op/s
Dec 01 20:53:56 compute-0 nova_compute[244568]: 2025-12-01 20:53:56.958 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:53:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 01 20:53:57 compute-0 ceph-mon[75880]: osdmap e94: 3 total, 3 up, 3 in
Dec 01 20:53:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 01 20:53:57 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 01 20:53:57 compute-0 nova_compute[244568]: 2025-12-01 20:53:57.956 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:53:57 compute-0 nova_compute[244568]: 2025-12-01 20:53:57.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:53:57 compute-0 nova_compute[244568]: 2025-12-01 20:53:57.958 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:53:57 compute-0 nova_compute[244568]: 2025-12-01 20:53:57.983 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:53:57 compute-0 nova_compute[244568]: 2025-12-01 20:53:57.983 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:53:57 compute-0 nova_compute[244568]: 2025-12-01 20:53:57.984 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:53:57 compute-0 nova_compute[244568]: 2025-12-01 20:53:57.984 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 20:53:57 compute-0 nova_compute[244568]: 2025-12-01 20:53:57.984 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:53:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 01 20:53:58 compute-0 ceph-mon[75880]: pgmap v757: 177 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 170 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 8.2 KiB/s wr, 128 op/s
Dec 01 20:53:58 compute-0 ceph-mon[75880]: osdmap e95: 3 total, 3 up, 3 in
Dec 01 20:53:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 01 20:53:58 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 01 20:53:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:53:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 01 20:53:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 01 20:53:58 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 01 20:53:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:53:58 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176782101' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:53:58 compute-0 nova_compute[244568]: 2025-12-01 20:53:58.538 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:53:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 10 KiB/s wr, 234 op/s
Dec 01 20:53:58 compute-0 nova_compute[244568]: 2025-12-01 20:53:58.912 244572 DEBUG nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 01 20:53:58 compute-0 nova_compute[244568]: 2025-12-01 20:53:58.912 244572 DEBUG nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.042 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.043 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5186MB free_disk=59.98813171684742GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.043 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.043 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:53:59 compute-0 ceph-mon[75880]: osdmap e96: 3 total, 3 up, 3 in
Dec 01 20:53:59 compute-0 ceph-mon[75880]: osdmap e97: 3 total, 3 up, 3 in
Dec 01 20:53:59 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/176782101' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.199 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Instance 96a11cae-1191-4376-947d-1d6c1d5e3e6d actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.199 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.199 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.261 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Refreshing inventories for resource provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.331 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Updating ProviderTree inventory for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.331 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Updating inventory in ProviderTree for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.345 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Refreshing aggregate associations for resource provider 1adb778b-ac5d-48bb-abc3-c422b12ca516, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.367 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Refreshing trait associations for resource provider 1adb778b-ac5d-48bb-abc3-c422b12ca516, traits: COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.401 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:53:59 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:53:59 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887604769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.930 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:53:59 compute-0 nova_compute[244568]: 2025-12-01 20:53:59.936 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:54:00 compute-0 nova_compute[244568]: 2025-12-01 20:54:00.026 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:54:00 compute-0 nova_compute[244568]: 2025-12-01 20:54:00.056 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 20:54:00 compute-0 nova_compute[244568]: 2025-12-01 20:54:00.056 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:00 compute-0 ceph-mon[75880]: pgmap v761: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 10 KiB/s wr, 234 op/s
Dec 01 20:54:00 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1887604769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:54:00 compute-0 nova_compute[244568]: 2025-12-01 20:54:00.336 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "b7e301b3-66f1-4486-b137-b94cb198342c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:00 compute-0 nova_compute[244568]: 2025-12-01 20:54:00.337 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "b7e301b3-66f1-4486-b137-b94cb198342c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:00 compute-0 nova_compute[244568]: 2025-12-01 20:54:00.357 244572 DEBUG nova.compute.manager [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 01 20:54:00 compute-0 nova_compute[244568]: 2025-12-01 20:54:00.425 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:00 compute-0 nova_compute[244568]: 2025-12-01 20:54:00.426 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:00 compute-0 nova_compute[244568]: 2025-12-01 20:54:00.430 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 01 20:54:00 compute-0 nova_compute[244568]: 2025-12-01 20:54:00.430 244572 INFO nova.compute.claims [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Claim successful on node compute-0.ctlplane.example.com
Dec 01 20:54:00 compute-0 nova_compute[244568]: 2025-12-01 20:54:00.532 244572 DEBUG oslo_concurrency.processutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:54:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 8.7 KiB/s wr, 197 op/s
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.055 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.056 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.056 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 20:54:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:54:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1493306322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.078 244572 DEBUG oslo_concurrency.processutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.084 244572 DEBUG nova.compute.provider_tree [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.088 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.101 244572 DEBUG nova.scheduler.client.report [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.122 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.123 244572 DEBUG nova.compute.manager [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.172 244572 DEBUG nova.compute.manager [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.173 244572 DEBUG nova.network.neutron [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.197 244572 INFO nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.215 244572 DEBUG nova.compute.manager [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 01 20:54:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1493306322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.287 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "refresh_cache-96a11cae-1191-4376-947d-1d6c1d5e3e6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.287 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquired lock "refresh_cache-96a11cae-1191-4376-947d-1d6c1d5e3e6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.288 244572 DEBUG nova.network.neutron [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.288 244572 DEBUG nova.objects.instance [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a11cae-1191-4376-947d-1d6c1d5e3e6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.308 244572 DEBUG nova.compute.manager [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.310 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.311 244572 INFO nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Creating image(s)
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.338 244572 DEBUG nova.storage.rbd_utils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] rbd image b7e301b3-66f1-4486-b137-b94cb198342c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.357 244572 DEBUG nova.storage.rbd_utils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] rbd image b7e301b3-66f1-4486-b137-b94cb198342c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.378 244572 DEBUG nova.storage.rbd_utils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] rbd image b7e301b3-66f1-4486-b137-b94cb198342c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.382 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "b67e54232d204973442263f6c58fec765693c959" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.382 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "b67e54232d204973442263f6c58fec765693c959" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.667 244572 DEBUG nova.network.neutron [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.667 244572 DEBUG nova.compute.manager [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.769 244572 DEBUG nova.network.neutron [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.789 244572 DEBUG nova.virt.libvirt.imagebackend [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Image locations are: [{'url': 'rbd://dcf60a89-bba0-58b0-a1bf-d4bde723201b/images/c5742a51-c71d-4530-b8f6-99ff9729c30a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://dcf60a89-bba0-58b0-a1bf-d4bde723201b/images/c5742a51-c71d-4530-b8f6-99ff9729c30a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.831 244572 DEBUG nova.virt.libvirt.imagebackend [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Selected location: {'url': 'rbd://dcf60a89-bba0-58b0-a1bf-d4bde723201b/images/c5742a51-c71d-4530-b8f6-99ff9729c30a/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.831 244572 DEBUG nova.storage.rbd_utils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] cloning images/c5742a51-c71d-4530-b8f6-99ff9729c30a@snap to None/b7e301b3-66f1-4486-b137-b94cb198342c_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 01 20:54:01 compute-0 nova_compute[244568]: 2025-12-01 20:54:01.911 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "b67e54232d204973442263f6c58fec765693c959" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.014 244572 DEBUG nova.storage.rbd_utils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] resizing rbd image b7e301b3-66f1-4486-b137-b94cb198342c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.057 244572 DEBUG nova.objects.instance [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lazy-loading 'migration_context' on Instance uuid b7e301b3-66f1-4486-b137-b94cb198342c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.073 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.074 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Ensure instance console log exists: /var/lib/nova/instances/b7e301b3-66f1-4486-b137-b94cb198342c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.074 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.074 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.074 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.075 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='db1be56686653ff39435baf771e97769',container_format='bare',created_at=2025-12-01T20:53:57Z,direct_url=<?>,disk_format='raw',id=c5742a51-c71d-4530-b8f6-99ff9729c30a,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1670755233',owner='34d478e941f64c469a8c9150557e448e',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-12-01T20:53:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'size': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'image_id': 'c5742a51-c71d-4530-b8f6-99ff9729c30a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.078 244572 WARNING nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.081 244572 DEBUG nova.virt.libvirt.host [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.081 244572 DEBUG nova.virt.libvirt.host [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.084 244572 DEBUG nova.virt.libvirt.host [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.084 244572 DEBUG nova.virt.libvirt.host [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.084 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.084 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:52:33Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='aed7407e-096f-4e58-b15f-394693b4d91f',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='db1be56686653ff39435baf771e97769',container_format='bare',created_at=2025-12-01T20:53:57Z,direct_url=<?>,disk_format='raw',id=c5742a51-c71d-4530-b8f6-99ff9729c30a,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1670755233',owner='34d478e941f64c469a8c9150557e448e',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-12-01T20:53:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.085 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.085 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.085 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.085 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.085 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.085 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.085 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.086 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.086 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.086 244572 DEBUG nova.virt.hardware [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.088 244572 DEBUG oslo_concurrency.processutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.101 244572 DEBUG nova.network.neutron [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.115 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Releasing lock "refresh_cache-96a11cae-1191-4376-947d-1d6c1d5e3e6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.115 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.116 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:54:02 compute-0 ceph-mon[75880]: pgmap v762: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 8.7 KiB/s wr, 197 op/s
Dec 01 20:54:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:54:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3095520653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:54:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:54:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3095520653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:54:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 01 20:54:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2975773296' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.606 244572 DEBUG oslo_concurrency.processutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.635 244572 DEBUG nova.storage.rbd_utils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] rbd image b7e301b3-66f1-4486-b137-b94cb198342c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 01 20:54:02 compute-0 nova_compute[244568]: 2025-12-01 20:54:02.640 244572 DEBUG oslo_concurrency.processutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:54:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v763: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 8.0 KiB/s wr, 183 op/s
Dec 01 20:54:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 01 20:54:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2619806067' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.152 244572 DEBUG oslo_concurrency.processutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
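(Annotation.) The repeated `ceph mon dump --format=json` subprocess calls above are how Nova's RBD backend discovers monitor addresses, which later appear as `<host name="192.168.122.100" port="6789"/>` in the guest disk XML. A minimal parsing sketch, using a hypothetical, trimmed sample of the JSON shape (the real dump carries many more keys):

```python
import json

# Hypothetical sample of `ceph mon dump --format=json` output, trimmed to
# the fields relevant here.
sample = json.dumps({
    "epoch": 1,
    "fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
    "mons": [
        {"name": "compute-0",
         "public_addrs": {"addrvec": [
             {"type": "v2", "addr": "192.168.122.100:3300"},
             {"type": "v1", "addr": "192.168.122.100:6789"},
         ]}},
    ],
})

def mon_v1_endpoints(mon_dump_json):
    """Extract legacy (v1, port 6789) monitor host:port pairs — the form
    that ends up as <host name=... port=.../> in the RBD disk XML."""
    dump = json.loads(mon_dump_json)
    endpoints = []
    for mon in dump["mons"]:
        for addr in mon["public_addrs"]["addrvec"]:
            if addr["type"] == "v1":
                host, port = addr["addr"].rsplit(":", 1)
                endpoints.append((host, int(port)))
    return endpoints

print(mon_v1_endpoints(sample))  # → [('192.168.122.100', 6789)]
```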
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.154 244572 DEBUG nova.objects.instance [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lazy-loading 'pci_devices' on Instance uuid b7e301b3-66f1-4486-b137-b94cb198342c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.177 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] End _get_guest_xml xml=<domain type="kvm">
Dec 01 20:54:03 compute-0 nova_compute[244568]:   <uuid>b7e301b3-66f1-4486-b137-b94cb198342c</uuid>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   <name>instance-00000002</name>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   <memory>131072</memory>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   <vcpu>1</vcpu>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   <metadata>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <nova:name>instance-depend-image</nova:name>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <nova:creationTime>2025-12-01 20:54:02</nova:creationTime>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <nova:flavor name="m1.nano">
Dec 01 20:54:03 compute-0 nova_compute[244568]:         <nova:memory>128</nova:memory>
Dec 01 20:54:03 compute-0 nova_compute[244568]:         <nova:disk>1</nova:disk>
Dec 01 20:54:03 compute-0 nova_compute[244568]:         <nova:swap>0</nova:swap>
Dec 01 20:54:03 compute-0 nova_compute[244568]:         <nova:ephemeral>0</nova:ephemeral>
Dec 01 20:54:03 compute-0 nova_compute[244568]:         <nova:vcpus>1</nova:vcpus>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       </nova:flavor>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <nova:owner>
Dec 01 20:54:03 compute-0 nova_compute[244568]:         <nova:user uuid="25f1ff3f1de64c54878ee8235abb222e">tempest-ImageDependencyTests-145084690-project-member</nova:user>
Dec 01 20:54:03 compute-0 nova_compute[244568]:         <nova:project uuid="34d478e941f64c469a8c9150557e448e">tempest-ImageDependencyTests-145084690</nova:project>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       </nova:owner>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <nova:root type="image" uuid="c5742a51-c71d-4530-b8f6-99ff9729c30a"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <nova:ports/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     </nova:instance>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   </metadata>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   <sysinfo type="smbios">
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <system>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <entry name="manufacturer">RDO</entry>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <entry name="product">OpenStack Compute</entry>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <entry name="serial">b7e301b3-66f1-4486-b137-b94cb198342c</entry>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <entry name="uuid">b7e301b3-66f1-4486-b137-b94cb198342c</entry>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <entry name="family">Virtual Machine</entry>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     </system>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   </sysinfo>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   <os>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <boot dev="hd"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <smbios mode="sysinfo"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   </os>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   <features>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <acpi/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <apic/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <vmcoreinfo/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   </features>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   <clock offset="utc">
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <timer name="pit" tickpolicy="delay"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <timer name="hpet" present="no"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   </clock>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   <cpu mode="host-model" match="exact">
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <topology sockets="1" cores="1" threads="1"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   </cpu>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   <devices>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <disk type="network" device="disk">
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <driver type="raw" cache="none"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <source protocol="rbd" name="vms/b7e301b3-66f1-4486-b137-b94cb198342c_disk">
Dec 01 20:54:03 compute-0 nova_compute[244568]:         <host name="192.168.122.100" port="6789"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       </source>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <auth username="openstack">
Dec 01 20:54:03 compute-0 nova_compute[244568]:         <secret type="ceph" uuid="dcf60a89-bba0-58b0-a1bf-d4bde723201b"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       </auth>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <target dev="vda" bus="virtio"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     </disk>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <disk type="network" device="cdrom">
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <driver type="raw" cache="none"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <source protocol="rbd" name="vms/b7e301b3-66f1-4486-b137-b94cb198342c_disk.config">
Dec 01 20:54:03 compute-0 nova_compute[244568]:         <host name="192.168.122.100" port="6789"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       </source>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <auth username="openstack">
Dec 01 20:54:03 compute-0 nova_compute[244568]:         <secret type="ceph" uuid="dcf60a89-bba0-58b0-a1bf-d4bde723201b"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       </auth>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <target dev="sda" bus="sata"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     </disk>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <serial type="pty">
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <log file="/var/lib/nova/instances/b7e301b3-66f1-4486-b137-b94cb198342c/console.log" append="off"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     </serial>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <video>
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <model type="virtio"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     </video>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <input type="tablet" bus="usb"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <rng model="virtio">
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <backend model="random">/dev/urandom</backend>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     </rng>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="pci" model="pcie-root-port"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <controller type="usb" index="0"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     <memballoon model="virtio">
Dec 01 20:54:03 compute-0 nova_compute[244568]:       <stats period="10"/>
Dec 01 20:54:03 compute-0 nova_compute[244568]:     </memballoon>
Dec 01 20:54:03 compute-0 nova_compute[244568]:   </devices>
Dec 01 20:54:03 compute-0 nova_compute[244568]: </domain>
Dec 01 20:54:03 compute-0 nova_compute[244568]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
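(Annotation.) The domain XML dumped above can be inspected programmatically, e.g. to map each guest target device to its backing RBD image. A small stdlib sketch, run against a trimmed, hypothetical fragment modeled on the logged disks:

```python
import xml.etree.ElementTree as ET

# Trimmed, hypothetical fragment modeled on the domain XML in the log above.
domain_xml = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/b7e301b3-66f1-4486-b137-b94cb198342c_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/b7e301b3-66f1-4486-b137-b94cb198342c_disk.config">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""

def rbd_disks(xml_text):
    """Map each guest target device (vda, sda, ...) to its RBD image name."""
    root = ET.fromstring(xml_text)
    return {
        disk.find("target").get("dev"): disk.find("source").get("name")
        for disk in root.iter("disk")
        if disk.find("source") is not None
        and disk.find("source").get("protocol") == "rbd"
    }

print(rbd_disks(domain_xml))
```

Note how the two disks mirror the log: the virtio root disk lives in the `vms` pool, and the SATA cdrom is the config drive (`*_disk.config`) that the subsequent `mkisofs` + `rbd import` steps create.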
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.218 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.219 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.219 244572 INFO nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Using config drive
Dec 01 20:54:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 01 20:54:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 01 20:54:03 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.254 244572 DEBUG nova.storage.rbd_utils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] rbd image b7e301b3-66f1-4486-b137-b94cb198342c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 01 20:54:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3095520653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:54:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3095520653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:54:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2975773296' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 01 20:54:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2619806067' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 01 20:54:03 compute-0 ceph-mon[75880]: osdmap e98: 3 total, 3 up, 3 in
Dec 01 20:54:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:54:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:54:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:54:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:54:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:54:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.394 244572 INFO nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Creating config drive at /var/lib/nova/instances/b7e301b3-66f1-4486-b137-b94cb198342c/disk.config
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.398 244572 DEBUG oslo_concurrency.processutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b7e301b3-66f1-4486-b137-b94cb198342c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfe_vpemo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.521 244572 DEBUG oslo_concurrency.processutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b7e301b3-66f1-4486-b137-b94cb198342c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfe_vpemo" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.541 244572 DEBUG nova.storage.rbd_utils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] rbd image b7e301b3-66f1-4486-b137-b94cb198342c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.544 244572 DEBUG oslo_concurrency.processutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b7e301b3-66f1-4486-b137-b94cb198342c/disk.config b7e301b3-66f1-4486-b137-b94cb198342c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.733 244572 DEBUG oslo_concurrency.processutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b7e301b3-66f1-4486-b137-b94cb198342c/disk.config b7e301b3-66f1-4486-b137-b94cb198342c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:54:03 compute-0 nova_compute[244568]: 2025-12-01 20:54:03.733 244572 INFO nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Deleting local config drive /var/lib/nova/instances/b7e301b3-66f1-4486-b137-b94cb198342c/disk.config because it was imported into RBD.
Dec 01 20:54:03 compute-0 systemd-machined[207098]: New machine qemu-2-instance-00000002.
Dec 01 20:54:03 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec 01 20:54:04 compute-0 ceph-mon[75880]: pgmap v763: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 8.0 KiB/s wr, 183 op/s
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.275 244572 DEBUG nova.virt.driver [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Emitting event <LifecycleEvent: 1764622444.274444, b7e301b3-66f1-4486-b137-b94cb198342c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.276 244572 INFO nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] VM Resumed (Lifecycle Event)
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.279 244572 DEBUG nova.compute.manager [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.279 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.284 244572 INFO nova.virt.libvirt.driver [-] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Instance spawned successfully.
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.284 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.305 244572 DEBUG nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.308 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.309 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.309 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.309 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.310 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.310 244572 DEBUG nova.virt.libvirt.driver [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.313 244572 DEBUG nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.351 244572 INFO nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.351 244572 DEBUG nova.virt.driver [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] Emitting event <LifecycleEvent: 1764622444.2755725, b7e301b3-66f1-4486-b137-b94cb198342c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.351 244572 INFO nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] VM Started (Lifecycle Event)
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.376 244572 DEBUG nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.379 244572 DEBUG nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.384 244572 INFO nova.compute.manager [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Took 3.07 seconds to spawn the instance on the hypervisor.
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.384 244572 DEBUG nova.compute.manager [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.409 244572 INFO nova.compute.manager [None req-691293d4-1fcc-46eb-80a6-413c0df961c1 - - - - - -] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.444 244572 INFO nova.compute.manager [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Took 4.04 seconds to build instance.
Dec 01 20:54:04 compute-0 nova_compute[244568]: 2025-12-01 20:54:04.460 244572 DEBUG oslo_concurrency.lockutils [None req-daece803-44f7-4506-90bb-c079b083478b 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "b7e301b3-66f1-4486-b137-b94cb198342c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
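(Annotation.) The "During sync_power_state the instance has a pending task (spawning). Skip." lines above show the lifecycle handler declining to reconcile DB power_state 0 with VM power_state 1 while spawn is in flight. A hypothetical condensed sketch of that decision — not Nova's actual `_sync_instance_power_state`, which handles many more states and actions:

```python
# Hypothetical condensed sketch of the "pending task -> skip sync" decision
# visible in the log; Nova's real _sync_instance_power_state covers far
# more cases (stop/start resolution, stuck migrations, etc.).
NOSTATE, RUNNING = 0, 1  # libvirt-style power states, as in the log

def sync_power_state(db_power_state, vm_power_state, task_state):
    """Return the action the periodic power-state sync would take."""
    if task_state is not None:
        # e.g. task_state == 'spawning': another code path owns the
        # instance right now, so the audit must not interfere.
        return "skip: pending task (%s)" % task_state
    if db_power_state != vm_power_state:
        return "update DB: %d -> %d" % (db_power_state, vm_power_state)
    return "in sync"

# Matches the logged skip during spawn:
print(sync_power_state(NOSTATE, RUNNING, "spawning"))
```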
Dec 01 20:54:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 4.9 KiB/s wr, 111 op/s
Dec 01 20:54:06 compute-0 ceph-mon[75880]: pgmap v765: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 4.9 KiB/s wr, 111 op/s
Dec 01 20:54:06 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:54:06.669 155855 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:ee:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2e:39:ea:af:48:04'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 01 20:54:06 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:54:06.671 155855 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 01 20:54:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v766: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.8 KiB/s wr, 86 op/s
Dec 01 20:54:07 compute-0 nova_compute[244568]: 2025-12-01 20:54:07.911 244572 DEBUG nova.compute.manager [None req-970cde97-a210-4be2-8d28-65b8fb2cc4e7 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 01 20:54:07 compute-0 nova_compute[244568]: 2025-12-01 20:54:07.974 244572 INFO nova.compute.manager [None req-970cde97-a210-4be2-8d28-65b8fb2cc4e7 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] instance snapshotting
Dec 01 20:54:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 01 20:54:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 01 20:54:08 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 01 20:54:08 compute-0 ceph-mon[75880]: pgmap v766: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.8 KiB/s wr, 86 op/s
Dec 01 20:54:08 compute-0 ceph-mon[75880]: osdmap e99: 3 total, 3 up, 3 in
Dec 01 20:54:08 compute-0 nova_compute[244568]: 2025-12-01 20:54:08.393 244572 INFO nova.virt.libvirt.driver [None req-970cde97-a210-4be2-8d28-65b8fb2cc4e7 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Beginning live snapshot process
Dec 01 20:54:08 compute-0 nova_compute[244568]: 2025-12-01 20:54:08.551 244572 DEBUG nova.storage.rbd_utils [None req-970cde97-a210-4be2-8d28-65b8fb2cc4e7 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] creating snapshot(db8c6c23d7d0447687916ff48b1a261f) on rbd image(b7e301b3-66f1-4486-b137-b94cb198342c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 01 20:54:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v768: 177 pgs: 177 active+clean; 42 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 21 KiB/s wr, 72 op/s
Dec 01 20:54:09 compute-0 sudo[248976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:54:09 compute-0 sudo[248976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:54:09 compute-0 sudo[248976]: pam_unix(sudo:session): session closed for user root
Dec 01 20:54:09 compute-0 sudo[249001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 01 20:54:09 compute-0 sudo[249001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:54:09 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.329504) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622449329649, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1635, "num_deletes": 254, "total_data_size": 1702779, "memory_usage": 1737120, "flush_reason": "Manual Compaction"}
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 01 20:54:09 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622449338398, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1665520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14268, "largest_seqno": 15902, "table_properties": {"data_size": 1657646, "index_size": 4821, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16019, "raw_average_key_size": 20, "raw_value_size": 1641880, "raw_average_value_size": 2088, "num_data_blocks": 217, "num_entries": 786, "num_filter_entries": 786, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764622329, "oldest_key_time": 1764622329, "file_creation_time": 1764622449, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 8818 microseconds, and 3756 cpu microseconds.
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.338433) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1665520 bytes OK
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.338451) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.340061) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.340073) EVENT_LOG_v1 {"time_micros": 1764622449340069, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.340084) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1695556, prev total WAL file size 1695597, number of live WAL files 2.
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.340755) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1626KB)], [35(5383KB)]
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622449340808, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 7177734, "oldest_snapshot_seqno": -1}
Dec 01 20:54:09 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3615 keys, 5961059 bytes, temperature: kUnknown
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622449395442, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 5961059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5933332, "index_size": 17569, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9093, "raw_key_size": 85622, "raw_average_key_size": 23, "raw_value_size": 5864826, "raw_average_value_size": 1622, "num_data_blocks": 754, "num_entries": 3615, "num_filter_entries": 3615, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 1764622449, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.395889) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 5961059 bytes
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.397172) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.8 rd, 108.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 5.3 +0.0 blob) out(5.7 +0.0 blob), read-write-amplify(7.9) write-amplify(3.6) OK, records in: 4136, records dropped: 521 output_compression: NoCompression
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.397234) EVENT_LOG_v1 {"time_micros": 1764622449397218, "job": 16, "event": "compaction_finished", "compaction_time_micros": 54860, "compaction_time_cpu_micros": 24022, "output_level": 6, "num_output_files": 1, "total_output_size": 5961059, "num_input_records": 4136, "num_output_records": 3615, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622449397744, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622449399588, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.340691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.399627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.399633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.399636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.399639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:54:09 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:54:09.399641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:54:09 compute-0 nova_compute[244568]: 2025-12-01 20:54:09.407 244572 DEBUG nova.storage.rbd_utils [None req-970cde97-a210-4be2-8d28-65b8fb2cc4e7 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] cloning vms/b7e301b3-66f1-4486-b137-b94cb198342c_disk@db8c6c23d7d0447687916ff48b1a261f to images/174ae210-ee9d-4985-af0a-b6b0201e2039 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 01 20:54:09 compute-0 nova_compute[244568]: 2025-12-01 20:54:09.587 244572 DEBUG nova.storage.rbd_utils [None req-970cde97-a210-4be2-8d28-65b8fb2cc4e7 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] flattening images/174ae210-ee9d-4985-af0a-b6b0201e2039 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 01 20:54:09 compute-0 podman[249105]: 2025-12-01 20:54:09.65474357 +0000 UTC m=+0.065742829 container exec 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 20:54:09 compute-0 nova_compute[244568]: 2025-12-01 20:54:09.877 244572 DEBUG nova.storage.rbd_utils [None req-970cde97-a210-4be2-8d28-65b8fb2cc4e7 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] removing snapshot(db8c6c23d7d0447687916ff48b1a261f) on rbd image(b7e301b3-66f1-4486-b137-b94cb198342c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 01 20:54:09 compute-0 podman[249143]: 2025-12-01 20:54:09.877922983 +0000 UTC m=+0.130483766 container exec_died 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:54:09 compute-0 podman[249105]: 2025-12-01 20:54:09.883499997 +0000 UTC m=+0.294499256 container exec_died 4df6a7b208c73e439d6c0e5db16b4367dd856c21847beb7cb18fc52390a379da (image=quay.io/ceph/ceph:v20, name=ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:54:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 01 20:54:10 compute-0 ceph-mon[75880]: pgmap v768: 177 pgs: 177 active+clean; 42 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 21 KiB/s wr, 72 op/s
Dec 01 20:54:10 compute-0 ceph-mon[75880]: osdmap e100: 3 total, 3 up, 3 in
Dec 01 20:54:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 01 20:54:10 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 01 20:54:10 compute-0 nova_compute[244568]: 2025-12-01 20:54:10.430 244572 DEBUG nova.storage.rbd_utils [None req-970cde97-a210-4be2-8d28-65b8fb2cc4e7 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] creating snapshot(snap) on rbd image(174ae210-ee9d-4985-af0a-b6b0201e2039) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 01 20:54:10 compute-0 sudo[249001]: pam_unix(sudo:session): session closed for user root
Dec 01 20:54:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:54:10 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:54:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:54:10 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:54:10 compute-0 sudo[249327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:54:10 compute-0 sudo[249327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:54:10 compute-0 sudo[249327]: pam_unix(sudo:session): session closed for user root
Dec 01 20:54:10 compute-0 sudo[249352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:54:10 compute-0 sudo[249352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:54:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v771: 177 pgs: 177 active+clean; 42 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 25 KiB/s wr, 26 op/s
Dec 01 20:54:11 compute-0 sudo[249352]: pam_unix(sudo:session): session closed for user root
Dec 01 20:54:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 01 20:54:11 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 01 20:54:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:54:11 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:54:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:54:11 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:54:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:54:11 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:54:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:54:11 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:54:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:54:11 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:54:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:54:11 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:54:11 compute-0 sudo[249408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:54:11 compute-0 sudo[249408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:54:11 compute-0 sudo[249408]: pam_unix(sudo:session): session closed for user root
Dec 01 20:54:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 01 20:54:11 compute-0 sudo[249433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:54:11 compute-0 sudo[249433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:54:11 compute-0 ceph-mon[75880]: osdmap e101: 3 total, 3 up, 3 in
Dec 01 20:54:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:54:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:54:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 01 20:54:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:54:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:54:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:54:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:54:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:54:11 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:54:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 01 20:54:11 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 01 20:54:11 compute-0 podman[249470]: 2025-12-01 20:54:11.844626132 +0000 UTC m=+0.063329094 container create 007d1c76a0e6a7d0e6a8400951356f4db070988d6459555fc5db384707601ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:54:11 compute-0 systemd[1]: Started libpod-conmon-007d1c76a0e6a7d0e6a8400951356f4db070988d6459555fc5db384707601ff5.scope.
Dec 01 20:54:11 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:54:11 compute-0 podman[249470]: 2025-12-01 20:54:11.808368583 +0000 UTC m=+0.027071595 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:54:11 compute-0 podman[249470]: 2025-12-01 20:54:11.943003227 +0000 UTC m=+0.161706229 container init 007d1c76a0e6a7d0e6a8400951356f4db070988d6459555fc5db384707601ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:54:11 compute-0 podman[249470]: 2025-12-01 20:54:11.950076207 +0000 UTC m=+0.168779169 container start 007d1c76a0e6a7d0e6a8400951356f4db070988d6459555fc5db384707601ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:54:11 compute-0 objective_euler[249486]: 167 167
Dec 01 20:54:11 compute-0 systemd[1]: libpod-007d1c76a0e6a7d0e6a8400951356f4db070988d6459555fc5db384707601ff5.scope: Deactivated successfully.
Dec 01 20:54:11 compute-0 podman[249470]: 2025-12-01 20:54:11.983676414 +0000 UTC m=+0.202379376 container attach 007d1c76a0e6a7d0e6a8400951356f4db070988d6459555fc5db384707601ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_euler, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:54:11 compute-0 podman[249470]: 2025-12-01 20:54:11.983937122 +0000 UTC m=+0.202640084 container died 007d1c76a0e6a7d0e6a8400951356f4db070988d6459555fc5db384707601ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:54:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2a78aa96101c77e2a3da0816fc0ef1581db3ddf07a89eee034ba50e0ec9a2d4-merged.mount: Deactivated successfully.
Dec 01 20:54:12 compute-0 podman[249470]: 2025-12-01 20:54:12.020994867 +0000 UTC m=+0.239697829 container remove 007d1c76a0e6a7d0e6a8400951356f4db070988d6459555fc5db384707601ff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:54:12 compute-0 systemd[1]: libpod-conmon-007d1c76a0e6a7d0e6a8400951356f4db070988d6459555fc5db384707601ff5.scope: Deactivated successfully.
Dec 01 20:54:12 compute-0 podman[249512]: 2025-12-01 20:54:12.275730593 +0000 UTC m=+0.098988986 container create c826158ccfccfd81cdc971bc894582539759153812ede6d5615dd8184d8ea7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:54:12 compute-0 podman[249512]: 2025-12-01 20:54:12.200477758 +0000 UTC m=+0.023736221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:54:12 compute-0 systemd[1]: Started libpod-conmon-c826158ccfccfd81cdc971bc894582539759153812ede6d5615dd8184d8ea7fa.scope.
Dec 01 20:54:12 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bc47a03991b08818ce61f30df10b97367a6d3c5ad55ed7c66d2331f89784a8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bc47a03991b08818ce61f30df10b97367a6d3c5ad55ed7c66d2331f89784a8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bc47a03991b08818ce61f30df10b97367a6d3c5ad55ed7c66d2331f89784a8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bc47a03991b08818ce61f30df10b97367a6d3c5ad55ed7c66d2331f89784a8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bc47a03991b08818ce61f30df10b97367a6d3c5ad55ed7c66d2331f89784a8b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:12 compute-0 podman[249512]: 2025-12-01 20:54:12.513164329 +0000 UTC m=+0.336422722 container init c826158ccfccfd81cdc971bc894582539759153812ede6d5615dd8184d8ea7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_davinci, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 01 20:54:12 compute-0 podman[249512]: 2025-12-01 20:54:12.521658104 +0000 UTC m=+0.344916447 container start c826158ccfccfd81cdc971bc894582539759153812ede6d5615dd8184d8ea7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_davinci, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 01 20:54:12 compute-0 podman[249512]: 2025-12-01 20:54:12.525568406 +0000 UTC m=+0.348826779 container attach c826158ccfccfd81cdc971bc894582539759153812ede6d5615dd8184d8ea7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_davinci, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 01 20:54:12 compute-0 ceph-mon[75880]: pgmap v771: 177 pgs: 177 active+clean; 42 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 25 KiB/s wr, 26 op/s
Dec 01 20:54:12 compute-0 ceph-mon[75880]: osdmap e102: 3 total, 3 up, 3 in
Dec 01 20:54:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v773: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 5.6 KiB/s wr, 105 op/s
Dec 01 20:54:12 compute-0 clever_davinci[249529]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:54:12 compute-0 clever_davinci[249529]: --> All data devices are unavailable
Dec 01 20:54:13 compute-0 systemd[1]: libpod-c826158ccfccfd81cdc971bc894582539759153812ede6d5615dd8184d8ea7fa.scope: Deactivated successfully.
Dec 01 20:54:13 compute-0 podman[249549]: 2025-12-01 20:54:13.061960736 +0000 UTC m=+0.022152331 container died c826158ccfccfd81cdc971bc894582539759153812ede6d5615dd8184d8ea7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:54:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bc47a03991b08818ce61f30df10b97367a6d3c5ad55ed7c66d2331f89784a8b-merged.mount: Deactivated successfully.
Dec 01 20:54:13 compute-0 podman[249549]: 2025-12-01 20:54:13.165635636 +0000 UTC m=+0.125827261 container remove c826158ccfccfd81cdc971bc894582539759153812ede6d5615dd8184d8ea7fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 01 20:54:13 compute-0 systemd[1]: libpod-conmon-c826158ccfccfd81cdc971bc894582539759153812ede6d5615dd8184d8ea7fa.scope: Deactivated successfully.
Dec 01 20:54:13 compute-0 sudo[249433]: pam_unix(sudo:session): session closed for user root
Dec 01 20:54:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:13 compute-0 sudo[249564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:54:13 compute-0 sudo[249564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:54:13 compute-0 sudo[249564]: pam_unix(sudo:session): session closed for user root
Dec 01 20:54:13 compute-0 sudo[249589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:54:13 compute-0 sudo[249589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:54:13 compute-0 podman[249626]: 2025-12-01 20:54:13.605749547 +0000 UTC m=+0.026210128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:54:13 compute-0 podman[249626]: 2025-12-01 20:54:13.700929272 +0000 UTC m=+0.121389793 container create 1c726a546f81708505af1977a777a300f668fa65cc88b743b56dc1ecbdabe9f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 01 20:54:13 compute-0 ceph-mon[75880]: pgmap v773: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 5.6 KiB/s wr, 105 op/s
Dec 01 20:54:13 compute-0 systemd[1]: Started libpod-conmon-1c726a546f81708505af1977a777a300f668fa65cc88b743b56dc1ecbdabe9f9.scope.
Dec 01 20:54:13 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:54:13 compute-0 nova_compute[244568]: 2025-12-01 20:54:13.812 244572 INFO nova.virt.libvirt.driver [None req-970cde97-a210-4be2-8d28-65b8fb2cc4e7 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Snapshot image upload complete
Dec 01 20:54:13 compute-0 nova_compute[244568]: 2025-12-01 20:54:13.815 244572 INFO nova.compute.manager [None req-970cde97-a210-4be2-8d28-65b8fb2cc4e7 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Took 5.84 seconds to snapshot the instance on the hypervisor.
Dec 01 20:54:13 compute-0 podman[249626]: 2025-12-01 20:54:13.966998211 +0000 UTC m=+0.387458752 container init 1c726a546f81708505af1977a777a300f668fa65cc88b743b56dc1ecbdabe9f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 01 20:54:13 compute-0 podman[249626]: 2025-12-01 20:54:13.978502779 +0000 UTC m=+0.398963330 container start 1c726a546f81708505af1977a777a300f668fa65cc88b743b56dc1ecbdabe9f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:54:13 compute-0 dreamy_panini[249642]: 167 167
Dec 01 20:54:13 compute-0 systemd[1]: libpod-1c726a546f81708505af1977a777a300f668fa65cc88b743b56dc1ecbdabe9f9.scope: Deactivated successfully.
Dec 01 20:54:14 compute-0 podman[249626]: 2025-12-01 20:54:14.020604961 +0000 UTC m=+0.441065572 container attach 1c726a546f81708505af1977a777a300f668fa65cc88b743b56dc1ecbdabe9f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:54:14 compute-0 podman[249626]: 2025-12-01 20:54:14.021329373 +0000 UTC m=+0.441789894 container died 1c726a546f81708505af1977a777a300f668fa65cc88b743b56dc1ecbdabe9f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:54:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b83519db55a2bfe0754dcc962a6b36215aecec9ed754a3b1ec6cd2837f81b8b-merged.mount: Deactivated successfully.
Dec 01 20:54:14 compute-0 podman[249626]: 2025-12-01 20:54:14.175375062 +0000 UTC m=+0.595835613 container remove 1c726a546f81708505af1977a777a300f668fa65cc88b743b56dc1ecbdabe9f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:54:14 compute-0 systemd[1]: libpod-conmon-1c726a546f81708505af1977a777a300f668fa65cc88b743b56dc1ecbdabe9f9.scope: Deactivated successfully.
Dec 01 20:54:14 compute-0 podman[249668]: 2025-12-01 20:54:14.37528458 +0000 UTC m=+0.067890156 container create 12956e920e76b4f7ec9d1f5029532084fbbd47950ac467979134b4537a5df93c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_agnesi, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:54:14 compute-0 systemd[1]: Started libpod-conmon-12956e920e76b4f7ec9d1f5029532084fbbd47950ac467979134b4537a5df93c.scope.
Dec 01 20:54:14 compute-0 podman[249668]: 2025-12-01 20:54:14.34542193 +0000 UTC m=+0.038027596 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:54:14 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a127b58c65f248285ad8d151caade18582f6c28e55a1b0380c3ce42439e93fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a127b58c65f248285ad8d151caade18582f6c28e55a1b0380c3ce42439e93fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a127b58c65f248285ad8d151caade18582f6c28e55a1b0380c3ce42439e93fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a127b58c65f248285ad8d151caade18582f6c28e55a1b0380c3ce42439e93fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:14 compute-0 podman[249668]: 2025-12-01 20:54:14.465948394 +0000 UTC m=+0.158554010 container init 12956e920e76b4f7ec9d1f5029532084fbbd47950ac467979134b4537a5df93c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 01 20:54:14 compute-0 podman[249668]: 2025-12-01 20:54:14.480813457 +0000 UTC m=+0.173419023 container start 12956e920e76b4f7ec9d1f5029532084fbbd47950ac467979134b4537a5df93c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 01 20:54:14 compute-0 podman[249668]: 2025-12-01 20:54:14.485531694 +0000 UTC m=+0.178137260 container attach 12956e920e76b4f7ec9d1f5029532084fbbd47950ac467979134b4537a5df93c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 01 20:54:14 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:54:14.673 155855 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=84a1d907-d341-4608-b17a-1f738619ea16, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]: {
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:     "0": [
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:         {
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "devices": [
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "/dev/loop3"
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             ],
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_name": "ceph_lv0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_size": "21470642176",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "name": "ceph_lv0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "tags": {
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.cluster_name": "ceph",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.crush_device_class": "",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.encrypted": "0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.objectstore": "bluestore",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.osd_id": "0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.type": "block",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.vdo": "0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.with_tpm": "0"
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             },
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "type": "block",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "vg_name": "ceph_vg0"
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:         }
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:     ],
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:     "1": [
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:         {
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "devices": [
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "/dev/loop4"
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             ],
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_name": "ceph_lv1",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_size": "21470642176",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "name": "ceph_lv1",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "tags": {
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.cluster_name": "ceph",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.crush_device_class": "",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.encrypted": "0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.objectstore": "bluestore",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.osd_id": "1",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.type": "block",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.vdo": "0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.with_tpm": "0"
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             },
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "type": "block",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "vg_name": "ceph_vg1"
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:         }
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:     ],
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:     "2": [
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:         {
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "devices": [
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "/dev/loop5"
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             ],
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_name": "ceph_lv2",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_size": "21470642176",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "name": "ceph_lv2",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "tags": {
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.cluster_name": "ceph",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.crush_device_class": "",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.encrypted": "0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.objectstore": "bluestore",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.osd_id": "2",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.type": "block",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.vdo": "0",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:                 "ceph.with_tpm": "0"
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             },
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "type": "block",
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:             "vg_name": "ceph_vg2"
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:         }
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]:     ]
Dec 01 20:54:14 compute-0 jolly_agnesi[249685]: }
Dec 01 20:54:14 compute-0 systemd[1]: libpod-12956e920e76b4f7ec9d1f5029532084fbbd47950ac467979134b4537a5df93c.scope: Deactivated successfully.
Dec 01 20:54:14 compute-0 podman[249694]: 2025-12-01 20:54:14.855868071 +0000 UTC m=+0.025354301 container died 12956e920e76b4f7ec9d1f5029532084fbbd47950ac467979134b4537a5df93c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 01 20:54:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v774: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 5.0 KiB/s wr, 110 op/s
Dec 01 20:54:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a127b58c65f248285ad8d151caade18582f6c28e55a1b0380c3ce42439e93fc-merged.mount: Deactivated successfully.
Dec 01 20:54:14 compute-0 podman[249694]: 2025-12-01 20:54:14.897326413 +0000 UTC m=+0.066812623 container remove 12956e920e76b4f7ec9d1f5029532084fbbd47950ac467979134b4537a5df93c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_agnesi, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 01 20:54:14 compute-0 systemd[1]: libpod-conmon-12956e920e76b4f7ec9d1f5029532084fbbd47950ac467979134b4537a5df93c.scope: Deactivated successfully.
Dec 01 20:54:14 compute-0 sudo[249589]: pam_unix(sudo:session): session closed for user root
Dec 01 20:54:15 compute-0 sudo[249709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:54:15 compute-0 sudo[249709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:54:15 compute-0 sudo[249709]: pam_unix(sudo:session): session closed for user root
Dec 01 20:54:15 compute-0 sudo[249734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:54:15 compute-0 sudo[249734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:54:15 compute-0 podman[249771]: 2025-12-01 20:54:15.394382898 +0000 UTC m=+0.025488605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:54:15 compute-0 podman[249771]: 2025-12-01 20:54:15.490619385 +0000 UTC m=+0.121725072 container create 2728de489b62535f51b4e6d152a74b92f847d42447dfd7b3640a881299cd1ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_agnesi, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:54:15 compute-0 systemd[1]: Started libpod-conmon-2728de489b62535f51b4e6d152a74b92f847d42447dfd7b3640a881299cd1ca3.scope.
Dec 01 20:54:15 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:54:15 compute-0 podman[249771]: 2025-12-01 20:54:15.565421576 +0000 UTC m=+0.196527283 container init 2728de489b62535f51b4e6d152a74b92f847d42447dfd7b3640a881299cd1ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_agnesi, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 01 20:54:15 compute-0 podman[249771]: 2025-12-01 20:54:15.572384903 +0000 UTC m=+0.203490590 container start 2728de489b62535f51b4e6d152a74b92f847d42447dfd7b3640a881299cd1ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_agnesi, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:54:15 compute-0 podman[249771]: 2025-12-01 20:54:15.576421429 +0000 UTC m=+0.207527126 container attach 2728de489b62535f51b4e6d152a74b92f847d42447dfd7b3640a881299cd1ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_agnesi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:54:15 compute-0 sweet_agnesi[249787]: 167 167
Dec 01 20:54:15 compute-0 systemd[1]: libpod-2728de489b62535f51b4e6d152a74b92f847d42447dfd7b3640a881299cd1ca3.scope: Deactivated successfully.
Dec 01 20:54:15 compute-0 podman[249771]: 2025-12-01 20:54:15.578947658 +0000 UTC m=+0.210053355 container died 2728de489b62535f51b4e6d152a74b92f847d42447dfd7b3640a881299cd1ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_agnesi, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 01 20:54:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-802673153d6f941a8433b8a2bbf38d98b11cca22f04b0bde5bcba6421705e92d-merged.mount: Deactivated successfully.
Dec 01 20:54:15 compute-0 podman[249771]: 2025-12-01 20:54:15.615552348 +0000 UTC m=+0.246658035 container remove 2728de489b62535f51b4e6d152a74b92f847d42447dfd7b3640a881299cd1ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_agnesi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:54:15 compute-0 systemd[1]: libpod-conmon-2728de489b62535f51b4e6d152a74b92f847d42447dfd7b3640a881299cd1ca3.scope: Deactivated successfully.
Dec 01 20:54:15 compute-0 podman[249811]: 2025-12-01 20:54:15.786020939 +0000 UTC m=+0.054438688 container create 1e39c35e4ac40f18ea521cd0823789625a500913b19922cee82a9328ce44686f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gould, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:54:15 compute-0 systemd[1]: Started libpod-conmon-1e39c35e4ac40f18ea521cd0823789625a500913b19922cee82a9328ce44686f.scope.
Dec 01 20:54:15 compute-0 podman[249811]: 2025-12-01 20:54:15.754799746 +0000 UTC m=+0.023217575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:54:15 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40dd519ebd20d05e5283c1ae20f81a539992605ae8d856d457a3270b16ae709/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40dd519ebd20d05e5283c1ae20f81a539992605ae8d856d457a3270b16ae709/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40dd519ebd20d05e5283c1ae20f81a539992605ae8d856d457a3270b16ae709/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40dd519ebd20d05e5283c1ae20f81a539992605ae8d856d457a3270b16ae709/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:54:15 compute-0 podman[249811]: 2025-12-01 20:54:15.879984345 +0000 UTC m=+0.148402114 container init 1e39c35e4ac40f18ea521cd0823789625a500913b19922cee82a9328ce44686f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:54:15 compute-0 podman[249811]: 2025-12-01 20:54:15.895422057 +0000 UTC m=+0.163839806 container start 1e39c35e4ac40f18ea521cd0823789625a500913b19922cee82a9328ce44686f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gould, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:54:15 compute-0 podman[249811]: 2025-12-01 20:54:15.898944647 +0000 UTC m=+0.167362456 container attach 1e39c35e4ac40f18ea521cd0823789625a500913b19922cee82a9328ce44686f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gould, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 01 20:54:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 01 20:54:15 compute-0 ceph-mon[75880]: pgmap v774: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 5.0 KiB/s wr, 110 op/s
Dec 01 20:54:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 01 20:54:15 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 01 20:54:16 compute-0 nova_compute[244568]: 2025-12-01 20:54:16.500 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "b7e301b3-66f1-4486-b137-b94cb198342c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:16 compute-0 nova_compute[244568]: 2025-12-01 20:54:16.502 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "b7e301b3-66f1-4486-b137-b94cb198342c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:16 compute-0 nova_compute[244568]: 2025-12-01 20:54:16.502 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "b7e301b3-66f1-4486-b137-b94cb198342c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:16 compute-0 nova_compute[244568]: 2025-12-01 20:54:16.502 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "b7e301b3-66f1-4486-b137-b94cb198342c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:16 compute-0 nova_compute[244568]: 2025-12-01 20:54:16.502 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "b7e301b3-66f1-4486-b137-b94cb198342c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:16 compute-0 nova_compute[244568]: 2025-12-01 20:54:16.503 244572 INFO nova.compute.manager [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Terminating instance
Dec 01 20:54:16 compute-0 nova_compute[244568]: 2025-12-01 20:54:16.504 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "refresh_cache-b7e301b3-66f1-4486-b137-b94cb198342c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 01 20:54:16 compute-0 nova_compute[244568]: 2025-12-01 20:54:16.504 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquired lock "refresh_cache-b7e301b3-66f1-4486-b137-b94cb198342c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 01 20:54:16 compute-0 nova_compute[244568]: 2025-12-01 20:54:16.504 244572 DEBUG nova.network.neutron [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 01 20:54:16 compute-0 lvm[249904]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:54:16 compute-0 lvm[249904]: VG ceph_vg0 finished
Dec 01 20:54:16 compute-0 lvm[249907]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:54:16 compute-0 lvm[249907]: VG ceph_vg1 finished
Dec 01 20:54:16 compute-0 lvm[249909]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:54:16 compute-0 lvm[249909]: VG ceph_vg2 finished
Dec 01 20:54:16 compute-0 agitated_gould[249828]: {}
Dec 01 20:54:16 compute-0 systemd[1]: libpod-1e39c35e4ac40f18ea521cd0823789625a500913b19922cee82a9328ce44686f.scope: Deactivated successfully.
Dec 01 20:54:16 compute-0 systemd[1]: libpod-1e39c35e4ac40f18ea521cd0823789625a500913b19922cee82a9328ce44686f.scope: Consumed 1.276s CPU time.
Dec 01 20:54:16 compute-0 podman[249811]: 2025-12-01 20:54:16.713003719 +0000 UTC m=+0.981421478 container died 1e39c35e4ac40f18ea521cd0823789625a500913b19922cee82a9328ce44686f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 01 20:54:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d40dd519ebd20d05e5283c1ae20f81a539992605ae8d856d457a3270b16ae709-merged.mount: Deactivated successfully.
Dec 01 20:54:16 compute-0 podman[249811]: 2025-12-01 20:54:16.769860099 +0000 UTC m=+1.038277888 container remove 1e39c35e4ac40f18ea521cd0823789625a500913b19922cee82a9328ce44686f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gould, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:54:16 compute-0 systemd[1]: libpod-conmon-1e39c35e4ac40f18ea521cd0823789625a500913b19922cee82a9328ce44686f.scope: Deactivated successfully.
Dec 01 20:54:16 compute-0 sudo[249734]: pam_unix(sudo:session): session closed for user root
Dec 01 20:54:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:54:16 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:54:16 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:54:16 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:54:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v776: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 4.6 KiB/s wr, 102 op/s
Dec 01 20:54:16 compute-0 sudo[249923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:54:16 compute-0 sudo[249923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:54:16 compute-0 sudo[249923]: pam_unix(sudo:session): session closed for user root
Dec 01 20:54:16 compute-0 ceph-mon[75880]: osdmap e103: 3 total, 3 up, 3 in
Dec 01 20:54:16 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:54:16 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:54:17 compute-0 nova_compute[244568]: 2025-12-01 20:54:17.320 244572 DEBUG nova.network.neutron [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 01 20:54:17 compute-0 nova_compute[244568]: 2025-12-01 20:54:17.544 244572 DEBUG nova.network.neutron [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 01 20:54:17 compute-0 nova_compute[244568]: 2025-12-01 20:54:17.561 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Releasing lock "refresh_cache-b7e301b3-66f1-4486-b137-b94cb198342c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 01 20:54:17 compute-0 nova_compute[244568]: 2025-12-01 20:54:17.562 244572 DEBUG nova.compute.manager [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 01 20:54:17 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec 01 20:54:17 compute-0 systemd-machined[207098]: Machine qemu-2-instance-00000002 terminated.
Dec 01 20:54:17 compute-0 nova_compute[244568]: 2025-12-01 20:54:17.782 244572 INFO nova.virt.libvirt.driver [-] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Instance destroyed successfully.
Dec 01 20:54:17 compute-0 nova_compute[244568]: 2025-12-01 20:54:17.783 244572 DEBUG nova.objects.instance [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lazy-loading 'resources' on Instance uuid b7e301b3-66f1-4486-b137-b94cb198342c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 01 20:54:18 compute-0 ceph-mon[75880]: pgmap v776: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 4.6 KiB/s wr, 102 op/s
Dec 01 20:54:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 01 20:54:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 01 20:54:18 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 01 20:54:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v778: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 6.9 KiB/s wr, 175 op/s
Dec 01 20:54:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 01 20:54:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 01 20:54:19 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 01 20:54:19 compute-0 ceph-mon[75880]: osdmap e104: 3 total, 3 up, 3 in
Dec 01 20:54:19 compute-0 nova_compute[244568]: 2025-12-01 20:54:19.441 244572 INFO nova.virt.libvirt.driver [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Deleting instance files /var/lib/nova/instances/b7e301b3-66f1-4486-b137-b94cb198342c_del
Dec 01 20:54:19 compute-0 nova_compute[244568]: 2025-12-01 20:54:19.442 244572 INFO nova.virt.libvirt.driver [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Deletion of /var/lib/nova/instances/b7e301b3-66f1-4486-b137-b94cb198342c_del complete
Dec 01 20:54:19 compute-0 nova_compute[244568]: 2025-12-01 20:54:19.482 244572 DEBUG nova.virt.libvirt.host [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 01 20:54:19 compute-0 nova_compute[244568]: 2025-12-01 20:54:19.482 244572 INFO nova.virt.libvirt.host [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] UEFI support detected
Dec 01 20:54:19 compute-0 nova_compute[244568]: 2025-12-01 20:54:19.484 244572 INFO nova.compute.manager [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Took 1.92 seconds to destroy the instance on the hypervisor.
Dec 01 20:54:19 compute-0 nova_compute[244568]: 2025-12-01 20:54:19.484 244572 DEBUG oslo.service.loopingcall [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 01 20:54:19 compute-0 nova_compute[244568]: 2025-12-01 20:54:19.484 244572 DEBUG nova.compute.manager [-] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 01 20:54:19 compute-0 nova_compute[244568]: 2025-12-01 20:54:19.484 244572 DEBUG nova.network.neutron [-] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 01 20:54:20 compute-0 ceph-mon[75880]: pgmap v778: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 6.9 KiB/s wr, 175 op/s
Dec 01 20:54:20 compute-0 ceph-mon[75880]: osdmap e105: 3 total, 3 up, 3 in
Dec 01 20:54:20 compute-0 nova_compute[244568]: 2025-12-01 20:54:20.420 244572 DEBUG nova.network.neutron [-] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 01 20:54:20 compute-0 nova_compute[244568]: 2025-12-01 20:54:20.436 244572 DEBUG nova.network.neutron [-] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 01 20:54:20 compute-0 nova_compute[244568]: 2025-12-01 20:54:20.450 244572 INFO nova.compute.manager [-] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Took 0.97 seconds to deallocate network for instance.
Dec 01 20:54:20 compute-0 nova_compute[244568]: 2025-12-01 20:54:20.498 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:20 compute-0 nova_compute[244568]: 2025-12-01 20:54:20.499 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:20 compute-0 nova_compute[244568]: 2025-12-01 20:54:20.581 244572 DEBUG oslo_concurrency.processutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:54:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v780: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 3.3 KiB/s wr, 101 op/s
Dec 01 20:54:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:54:21 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1483757563' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:54:21 compute-0 nova_compute[244568]: 2025-12-01 20:54:21.117 244572 DEBUG oslo_concurrency.processutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:54:21 compute-0 nova_compute[244568]: 2025-12-01 20:54:21.126 244572 DEBUG nova.compute.provider_tree [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:54:21 compute-0 nova_compute[244568]: 2025-12-01 20:54:21.146 244572 DEBUG nova.scheduler.client.report [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:54:21 compute-0 nova_compute[244568]: 2025-12-01 20:54:21.170 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:21 compute-0 nova_compute[244568]: 2025-12-01 20:54:21.201 244572 INFO nova.scheduler.client.report [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Deleted allocations for instance b7e301b3-66f1-4486-b137-b94cb198342c
Dec 01 20:54:21 compute-0 nova_compute[244568]: 2025-12-01 20:54:21.284 244572 DEBUG oslo_concurrency.lockutils [None req-98fca7ff-83bd-4beb-9b61-6bd996a7e2fe 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "b7e301b3-66f1-4486-b137-b94cb198342c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:21 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1483757563' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:54:22 compute-0 ceph-mon[75880]: pgmap v780: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 3.3 KiB/s wr, 101 op/s
Dec 01 20:54:22 compute-0 nova_compute[244568]: 2025-12-01 20:54:22.643 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "96a11cae-1191-4376-947d-1d6c1d5e3e6d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:22 compute-0 nova_compute[244568]: 2025-12-01 20:54:22.643 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "96a11cae-1191-4376-947d-1d6c1d5e3e6d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:22 compute-0 nova_compute[244568]: 2025-12-01 20:54:22.644 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "96a11cae-1191-4376-947d-1d6c1d5e3e6d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:22 compute-0 nova_compute[244568]: 2025-12-01 20:54:22.644 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "96a11cae-1191-4376-947d-1d6c1d5e3e6d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:22 compute-0 nova_compute[244568]: 2025-12-01 20:54:22.645 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "96a11cae-1191-4376-947d-1d6c1d5e3e6d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:22 compute-0 nova_compute[244568]: 2025-12-01 20:54:22.647 244572 INFO nova.compute.manager [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Terminating instance
Dec 01 20:54:22 compute-0 nova_compute[244568]: 2025-12-01 20:54:22.649 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "refresh_cache-96a11cae-1191-4376-947d-1d6c1d5e3e6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 01 20:54:22 compute-0 nova_compute[244568]: 2025-12-01 20:54:22.649 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquired lock "refresh_cache-96a11cae-1191-4376-947d-1d6c1d5e3e6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 01 20:54:22 compute-0 nova_compute[244568]: 2025-12-01 20:54:22.650 244572 DEBUG nova.network.neutron [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 01 20:54:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 4.3 KiB/s wr, 125 op/s
Dec 01 20:54:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 01 20:54:23 compute-0 nova_compute[244568]: 2025-12-01 20:54:23.440 244572 DEBUG nova.network.neutron [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 01 20:54:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 01 20:54:23 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 01 20:54:23 compute-0 ceph-mon[75880]: pgmap v781: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 4.3 KiB/s wr, 125 op/s
Dec 01 20:54:24 compute-0 nova_compute[244568]: 2025-12-01 20:54:24.422 244572 DEBUG nova.network.neutron [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 01 20:54:24 compute-0 nova_compute[244568]: 2025-12-01 20:54:24.440 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Releasing lock "refresh_cache-96a11cae-1191-4376-947d-1d6c1d5e3e6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 01 20:54:24 compute-0 nova_compute[244568]: 2025-12-01 20:54:24.441 244572 DEBUG nova.compute.manager [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 01 20:54:24 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec 01 20:54:24 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 1.074s CPU time.
Dec 01 20:54:24 compute-0 systemd-machined[207098]: Machine qemu-1-instance-00000001 terminated.
Dec 01 20:54:24 compute-0 podman[249991]: 2025-12-01 20:54:24.629049718 +0000 UTC m=+0.102145442 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:54:24 compute-0 nova_compute[244568]: 2025-12-01 20:54:24.667 244572 INFO nova.virt.libvirt.driver [-] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Instance destroyed successfully.
Dec 01 20:54:24 compute-0 nova_compute[244568]: 2025-12-01 20:54:24.668 244572 DEBUG nova.objects.instance [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lazy-loading 'resources' on Instance uuid 96a11cae-1191-4376-947d-1d6c1d5e3e6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 01 20:54:24 compute-0 nova_compute[244568]: 2025-12-01 20:54:24.850 244572 INFO nova.virt.libvirt.driver [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Deleting instance files /var/lib/nova/instances/96a11cae-1191-4376-947d-1d6c1d5e3e6d_del
Dec 01 20:54:24 compute-0 nova_compute[244568]: 2025-12-01 20:54:24.852 244572 INFO nova.virt.libvirt.driver [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Deletion of /var/lib/nova/instances/96a11cae-1191-4376-947d-1d6c1d5e3e6d_del complete
Dec 01 20:54:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v783: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.5 KiB/s wr, 84 op/s
Dec 01 20:54:24 compute-0 ceph-mon[75880]: osdmap e106: 3 total, 3 up, 3 in
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.035 244572 INFO nova.compute.manager [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Took 0.59 seconds to destroy the instance on the hypervisor.
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.036 244572 DEBUG oslo.service.loopingcall [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.036 244572 DEBUG nova.compute.manager [-] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.037 244572 DEBUG nova.network.neutron [-] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.420 244572 DEBUG nova.network.neutron [-] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.434 244572 DEBUG nova.network.neutron [-] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.451 244572 INFO nova.compute.manager [-] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Took 0.41 seconds to deallocate network for instance.
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.679 244572 INFO nova.compute.manager [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Took 0.23 seconds to detach 1 volumes for instance.
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.681 244572 DEBUG nova.compute.manager [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Deleting volume: cbad2b83-e395-4688-ac99-3302fd0459e1 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.868 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.868 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:25 compute-0 nova_compute[244568]: 2025-12-01 20:54:25.946 244572 DEBUG oslo_concurrency.processutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:54:25 compute-0 ceph-mon[75880]: pgmap v783: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.5 KiB/s wr, 84 op/s
Dec 01 20:54:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:54:26 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1284644868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:54:26 compute-0 nova_compute[244568]: 2025-12-01 20:54:26.516 244572 DEBUG oslo_concurrency.processutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:54:26 compute-0 nova_compute[244568]: 2025-12-01 20:54:26.522 244572 DEBUG nova.compute.provider_tree [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:54:26 compute-0 nova_compute[244568]: 2025-12-01 20:54:26.549 244572 DEBUG nova.scheduler.client.report [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:54:26 compute-0 nova_compute[244568]: 2025-12-01 20:54:26.572 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:26 compute-0 nova_compute[244568]: 2025-12-01 20:54:26.597 244572 INFO nova.scheduler.client.report [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Deleted allocations for instance 96a11cae-1191-4376-947d-1d6c1d5e3e6d
Dec 01 20:54:26 compute-0 nova_compute[244568]: 2025-12-01 20:54:26.663 244572 DEBUG oslo_concurrency.lockutils [None req-f2174d69-7608-4d23-bd8b-cf148b4c9576 25f1ff3f1de64c54878ee8235abb222e 34d478e941f64c469a8c9150557e448e - - default default] Lock "96a11cae-1191-4376-947d-1d6c1d5e3e6d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v784: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 43 op/s
Dec 01 20:54:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 01 20:54:26 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1284644868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:54:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 01 20:54:27 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 01 20:54:27 compute-0 podman[250055]: 2025-12-01 20:54:27.109026537 +0000 UTC m=+0.059785353 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 01 20:54:27 compute-0 podman[250056]: 2025-12-01 20:54:27.187318916 +0000 UTC m=+0.131510868 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Dec 01 20:54:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:54:27 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2339704623' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:54:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:54:27 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2339704623' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:54:28 compute-0 ceph-mon[75880]: pgmap v784: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 43 op/s
Dec 01 20:54:28 compute-0 ceph-mon[75880]: osdmap e107: 3 total, 3 up, 3 in
Dec 01 20:54:28 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/2339704623' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:54:28 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/2339704623' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:54:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 01 20:54:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 01 20:54:28 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 01 20:54:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v787: 177 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 167 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.0 KiB/s wr, 84 op/s
Dec 01 20:54:29 compute-0 ceph-mon[75880]: osdmap e108: 3 total, 3 up, 3 in
Dec 01 20:54:29 compute-0 ceph-mon[75880]: pgmap v787: 177 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 167 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.0 KiB/s wr, 84 op/s
Dec 01 20:54:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 167 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.9 KiB/s wr, 58 op/s
Dec 01 20:54:31 compute-0 ceph-mon[75880]: pgmap v788: 177 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 167 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.9 KiB/s wr, 58 op/s
Dec 01 20:54:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:54:32
Dec 01 20:54:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:54:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:54:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'backups', 'vms', 'volumes', 'images']
Dec 01 20:54:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:54:32 compute-0 nova_compute[244568]: 2025-12-01 20:54:32.781 244572 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764622457.7807262, b7e301b3-66f1-4486-b137-b94cb198342c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 01 20:54:32 compute-0 nova_compute[244568]: 2025-12-01 20:54:32.782 244572 INFO nova.compute.manager [-] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] VM Stopped (Lifecycle Event)
Dec 01 20:54:32 compute-0 nova_compute[244568]: 2025-12-01 20:54:32.801 244572 DEBUG nova.compute.manager [None req-0c38a1c2-8fd6-417b-ae37-1aeb64d55b52 - - - - - -] [instance: b7e301b3-66f1-4486-b137-b94cb198342c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 01 20:54:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v789: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:54:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:54:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 01 20:54:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 01 20:54:33 compute-0 ceph-mon[75880]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 01 20:54:33 compute-0 ceph-mon[75880]: pgmap v789: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Dec 01 20:54:33 compute-0 ceph-mon[75880]: osdmap e109: 3 total, 3 up, 3 in
Dec 01 20:54:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 KiB/s wr, 53 op/s
Dec 01 20:54:35 compute-0 ceph-mon[75880]: pgmap v791: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 KiB/s wr, 53 op/s
Dec 01 20:54:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v792: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:38 compute-0 ceph-mon[75880]: pgmap v792: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v793: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:39 compute-0 nova_compute[244568]: 2025-12-01 20:54:39.665 244572 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764622464.6641223, 96a11cae-1191-4376-947d-1d6c1d5e3e6d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 01 20:54:39 compute-0 nova_compute[244568]: 2025-12-01 20:54:39.665 244572 INFO nova.compute.manager [-] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] VM Stopped (Lifecycle Event)
Dec 01 20:54:39 compute-0 nova_compute[244568]: 2025-12-01 20:54:39.690 244572 DEBUG nova.compute.manager [None req-be77080a-3be2-4546-87c4-7161a0a166bd - - - - - -] [instance: 96a11cae-1191-4376-947d-1d6c1d5e3e6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 01 20:54:40 compute-0 ceph-mon[75880]: pgmap v793: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:54:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v794: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:42 compute-0 ceph-mon[75880]: pgmap v794: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v795: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:44 compute-0 ceph-mon[75880]: pgmap v795: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:54:44.357 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:54:44.357 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:54:44.357 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v796: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:46 compute-0 ceph-mon[75880]: pgmap v796: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v797: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:48 compute-0 ceph-mon[75880]: pgmap v797: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v798: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:50 compute-0 ceph-mon[75880]: pgmap v798: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v799: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:52 compute-0 ceph-mon[75880]: pgmap v799: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v800: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:53 compute-0 nova_compute[244568]: 2025-12-01 20:54:53.958 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:54:54 compute-0 ceph-mon[75880]: pgmap v800: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v801: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:54 compute-0 nova_compute[244568]: 2025-12-01 20:54:54.953 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:54:55 compute-0 podman[250097]: 2025-12-01 20:54:55.091075513 +0000 UTC m=+0.052721203 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 01 20:54:56 compute-0 ceph-mon[75880]: pgmap v801: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v802: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:56 compute-0 nova_compute[244568]: 2025-12-01 20:54:56.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:54:56 compute-0 nova_compute[244568]: 2025-12-01 20:54:56.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 20:54:57 compute-0 nova_compute[244568]: 2025-12-01 20:54:57.953 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:54:57 compute-0 nova_compute[244568]: 2025-12-01 20:54:57.971 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:54:57 compute-0 nova_compute[244568]: 2025-12-01 20:54:57.972 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:54:57 compute-0 nova_compute[244568]: 2025-12-01 20:54:57.972 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:54:58 compute-0 podman[250117]: 2025-12-01 20:54:58.125169084 +0000 UTC m=+0.082998176 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 01 20:54:58 compute-0 podman[250118]: 2025-12-01 20:54:58.150120812 +0000 UTC m=+0.096526548 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 01 20:54:58 compute-0 ceph-mon[75880]: pgmap v802: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:54:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v803: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:54:59 compute-0 nova_compute[244568]: 2025-12-01 20:54:59.956 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:54:59 compute-0 nova_compute[244568]: 2025-12-01 20:54:59.984 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:54:59 compute-0 nova_compute[244568]: 2025-12-01 20:54:59.984 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:54:59 compute-0 nova_compute[244568]: 2025-12-01 20:54:59.985 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:54:59 compute-0 nova_compute[244568]: 2025-12-01 20:54:59.985 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 20:54:59 compute-0 nova_compute[244568]: 2025-12-01 20:54:59.985 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:55:00 compute-0 ceph-mon[75880]: pgmap v803: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:55:00 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3712104695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:55:00 compute-0 nova_compute[244568]: 2025-12-01 20:55:00.521 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:55:00 compute-0 nova_compute[244568]: 2025-12-01 20:55:00.677 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:55:00 compute-0 nova_compute[244568]: 2025-12-01 20:55:00.678 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=59.988265527412295GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 20:55:00 compute-0 nova_compute[244568]: 2025-12-01 20:55:00.678 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:55:00 compute-0 nova_compute[244568]: 2025-12-01 20:55:00.678 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:55:00 compute-0 nova_compute[244568]: 2025-12-01 20:55:00.729 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 20:55:00 compute-0 nova_compute[244568]: 2025-12-01 20:55:00.729 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 20:55:00 compute-0 nova_compute[244568]: 2025-12-01 20:55:00.743 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:55:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v804: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:55:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898399671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:55:01 compute-0 nova_compute[244568]: 2025-12-01 20:55:01.231 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:55:01 compute-0 nova_compute[244568]: 2025-12-01 20:55:01.237 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:55:01 compute-0 nova_compute[244568]: 2025-12-01 20:55:01.251 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:55:01 compute-0 nova_compute[244568]: 2025-12-01 20:55:01.274 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 20:55:01 compute-0 nova_compute[244568]: 2025-12-01 20:55:01.275 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:55:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3712104695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:55:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2898399671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:55:02 compute-0 nova_compute[244568]: 2025-12-01 20:55:02.276 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:55:02 compute-0 nova_compute[244568]: 2025-12-01 20:55:02.276 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 20:55:02 compute-0 nova_compute[244568]: 2025-12-01 20:55:02.276 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 20:55:02 compute-0 nova_compute[244568]: 2025-12-01 20:55:02.292 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 20:55:02 compute-0 nova_compute[244568]: 2025-12-01 20:55:02.293 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:55:02 compute-0 ceph-mon[75880]: pgmap v804: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:55:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2041072871' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:55:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:55:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2041072871' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:55:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v805: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:55:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:55:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:55:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:55:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:55:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:55:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/2041072871' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:55:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/2041072871' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:55:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:04 compute-0 ceph-mon[75880]: pgmap v805: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v806: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:06 compute-0 ceph-mon[75880]: pgmap v806: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v807: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:08 compute-0 ceph-mon[75880]: pgmap v807: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v808: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:09 compute-0 ceph-mon[75880]: pgmap v808: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v809: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:11 compute-0 ceph-mon[75880]: pgmap v809: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v810: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:13 compute-0 ceph-mon[75880]: pgmap v810: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v811: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:15 compute-0 ceph-mon[75880]: pgmap v811: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v812: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:16 compute-0 sudo[250205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:55:16 compute-0 sudo[250205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:55:17 compute-0 sudo[250205]: pam_unix(sudo:session): session closed for user root
Dec 01 20:55:17 compute-0 sudo[250230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:55:17 compute-0 sudo[250230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:55:17 compute-0 sudo[250230]: pam_unix(sudo:session): session closed for user root
Dec 01 20:55:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:55:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:55:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:55:17 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:55:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:55:17 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:55:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:55:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:55:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:55:17 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:55:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:55:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:55:17 compute-0 sudo[250286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:55:17 compute-0 sudo[250286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:55:17 compute-0 sudo[250286]: pam_unix(sudo:session): session closed for user root
Dec 01 20:55:17 compute-0 sudo[250311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:55:17 compute-0 sudo[250311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:55:17 compute-0 ceph-mon[75880]: pgmap v812: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:55:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:55:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:55:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:55:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:55:17 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:55:18 compute-0 podman[250347]: 2025-12-01 20:55:18.158566704 +0000 UTC m=+0.051988841 container create fea48f1f32e21e6dad0409161047672f79102184e2a0004ba246732a577c414f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:55:18 compute-0 systemd[1]: Started libpod-conmon-fea48f1f32e21e6dad0409161047672f79102184e2a0004ba246732a577c414f.scope.
Dec 01 20:55:18 compute-0 podman[250347]: 2025-12-01 20:55:18.131878392 +0000 UTC m=+0.025300529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:55:18 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:55:18 compute-0 podman[250347]: 2025-12-01 20:55:18.254711079 +0000 UTC m=+0.148133186 container init fea48f1f32e21e6dad0409161047672f79102184e2a0004ba246732a577c414f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:55:18 compute-0 podman[250347]: 2025-12-01 20:55:18.26534899 +0000 UTC m=+0.158771137 container start fea48f1f32e21e6dad0409161047672f79102184e2a0004ba246732a577c414f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 01 20:55:18 compute-0 podman[250347]: 2025-12-01 20:55:18.269344625 +0000 UTC m=+0.162766762 container attach fea48f1f32e21e6dad0409161047672f79102184e2a0004ba246732a577c414f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 01 20:55:18 compute-0 quizzical_keldysh[250363]: 167 167
Dec 01 20:55:18 compute-0 systemd[1]: libpod-fea48f1f32e21e6dad0409161047672f79102184e2a0004ba246732a577c414f.scope: Deactivated successfully.
Dec 01 20:55:18 compute-0 podman[250347]: 2025-12-01 20:55:18.271566184 +0000 UTC m=+0.164988291 container died fea48f1f32e21e6dad0409161047672f79102184e2a0004ba246732a577c414f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:55:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-20fea038a092047c8b0df2a7522116c66816a3e3eedce492d47c93f4237aac46-merged.mount: Deactivated successfully.
Dec 01 20:55:18 compute-0 podman[250347]: 2025-12-01 20:55:18.31827906 +0000 UTC m=+0.211701167 container remove fea48f1f32e21e6dad0409161047672f79102184e2a0004ba246732a577c414f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:55:18 compute-0 systemd[1]: libpod-conmon-fea48f1f32e21e6dad0409161047672f79102184e2a0004ba246732a577c414f.scope: Deactivated successfully.
Dec 01 20:55:18 compute-0 podman[250389]: 2025-12-01 20:55:18.465483715 +0000 UTC m=+0.038014905 container create f09c7e01c2949526cae54f69df07862081ff5e92b506e65fad7edc163064b4be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bell, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 01 20:55:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:18 compute-0 systemd[1]: Started libpod-conmon-f09c7e01c2949526cae54f69df07862081ff5e92b506e65fad7edc163064b4be.scope.
Dec 01 20:55:18 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ad038938a88eb44871eaa8785f4760394e7501b26a583d8ae8735fd60654cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ad038938a88eb44871eaa8785f4760394e7501b26a583d8ae8735fd60654cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ad038938a88eb44871eaa8785f4760394e7501b26a583d8ae8735fd60654cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ad038938a88eb44871eaa8785f4760394e7501b26a583d8ae8735fd60654cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ad038938a88eb44871eaa8785f4760394e7501b26a583d8ae8735fd60654cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:18 compute-0 podman[250389]: 2025-12-01 20:55:18.53208789 +0000 UTC m=+0.104619080 container init f09c7e01c2949526cae54f69df07862081ff5e92b506e65fad7edc163064b4be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bell, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 01 20:55:18 compute-0 podman[250389]: 2025-12-01 20:55:18.538957864 +0000 UTC m=+0.111489054 container start f09c7e01c2949526cae54f69df07862081ff5e92b506e65fad7edc163064b4be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:55:18 compute-0 podman[250389]: 2025-12-01 20:55:18.541950357 +0000 UTC m=+0.114481547 container attach f09c7e01c2949526cae54f69df07862081ff5e92b506e65fad7edc163064b4be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 01 20:55:18 compute-0 podman[250389]: 2025-12-01 20:55:18.449787966 +0000 UTC m=+0.022319176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:55:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v813: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:18 compute-0 strange_bell[250405]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:55:18 compute-0 strange_bell[250405]: --> All data devices are unavailable
Dec 01 20:55:19 compute-0 systemd[1]: libpod-f09c7e01c2949526cae54f69df07862081ff5e92b506e65fad7edc163064b4be.scope: Deactivated successfully.
Dec 01 20:55:19 compute-0 podman[250425]: 2025-12-01 20:55:19.062405961 +0000 UTC m=+0.028843120 container died f09c7e01c2949526cae54f69df07862081ff5e92b506e65fad7edc163064b4be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 01 20:55:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-41ad038938a88eb44871eaa8785f4760394e7501b26a583d8ae8735fd60654cd-merged.mount: Deactivated successfully.
Dec 01 20:55:19 compute-0 podman[250425]: 2025-12-01 20:55:19.102789359 +0000 UTC m=+0.069226498 container remove f09c7e01c2949526cae54f69df07862081ff5e92b506e65fad7edc163064b4be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 01 20:55:19 compute-0 systemd[1]: libpod-conmon-f09c7e01c2949526cae54f69df07862081ff5e92b506e65fad7edc163064b4be.scope: Deactivated successfully.
Dec 01 20:55:19 compute-0 sudo[250311]: pam_unix(sudo:session): session closed for user root
Dec 01 20:55:19 compute-0 sudo[250440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:55:19 compute-0 sudo[250440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:55:19 compute-0 sudo[250440]: pam_unix(sudo:session): session closed for user root
Dec 01 20:55:19 compute-0 sudo[250465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:55:19 compute-0 sudo[250465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:55:19 compute-0 podman[250502]: 2025-12-01 20:55:19.530997998 +0000 UTC m=+0.044330961 container create c61dd4fd43d5e206a59a0f05c87d5c129d3ef7dea2028f8d5297be3c7e201557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 01 20:55:19 compute-0 systemd[1]: Started libpod-conmon-c61dd4fd43d5e206a59a0f05c87d5c129d3ef7dea2028f8d5297be3c7e201557.scope.
Dec 01 20:55:19 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:55:19 compute-0 podman[250502]: 2025-12-01 20:55:19.506598539 +0000 UTC m=+0.019931542 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:55:19 compute-0 podman[250502]: 2025-12-01 20:55:19.609845295 +0000 UTC m=+0.123178268 container init c61dd4fd43d5e206a59a0f05c87d5c129d3ef7dea2028f8d5297be3c7e201557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_archimedes, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:55:19 compute-0 podman[250502]: 2025-12-01 20:55:19.621397585 +0000 UTC m=+0.134730568 container start c61dd4fd43d5e206a59a0f05c87d5c129d3ef7dea2028f8d5297be3c7e201557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:55:19 compute-0 flamboyant_archimedes[250518]: 167 167
Dec 01 20:55:19 compute-0 podman[250502]: 2025-12-01 20:55:19.625193054 +0000 UTC m=+0.138526027 container attach c61dd4fd43d5e206a59a0f05c87d5c129d3ef7dea2028f8d5297be3c7e201557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_archimedes, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 01 20:55:19 compute-0 systemd[1]: libpod-c61dd4fd43d5e206a59a0f05c87d5c129d3ef7dea2028f8d5297be3c7e201557.scope: Deactivated successfully.
Dec 01 20:55:19 compute-0 podman[250502]: 2025-12-01 20:55:19.625809533 +0000 UTC m=+0.139142536 container died c61dd4fd43d5e206a59a0f05c87d5c129d3ef7dea2028f8d5297be3c7e201557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 01 20:55:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d929833b48f4a458a9129e352515d9f84f14fcb34ad0f456e14289c00604e041-merged.mount: Deactivated successfully.
Dec 01 20:55:19 compute-0 podman[250502]: 2025-12-01 20:55:19.668651537 +0000 UTC m=+0.181984510 container remove c61dd4fd43d5e206a59a0f05c87d5c129d3ef7dea2028f8d5297be3c7e201557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:55:19 compute-0 systemd[1]: libpod-conmon-c61dd4fd43d5e206a59a0f05c87d5c129d3ef7dea2028f8d5297be3c7e201557.scope: Deactivated successfully.
Dec 01 20:55:19 compute-0 podman[250542]: 2025-12-01 20:55:19.847027154 +0000 UTC m=+0.044442085 container create 2f0babad3d5cf3963723732adf9fe5b7f8463058a9f929e9689d027e698ae018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:55:19 compute-0 systemd[1]: Started libpod-conmon-2f0babad3d5cf3963723732adf9fe5b7f8463058a9f929e9689d027e698ae018.scope.
Dec 01 20:55:19 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae84ae8d0f6d9649a9fc87bf720c76b6bc90efae1e987e7c1f64dded370156fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae84ae8d0f6d9649a9fc87bf720c76b6bc90efae1e987e7c1f64dded370156fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae84ae8d0f6d9649a9fc87bf720c76b6bc90efae1e987e7c1f64dded370156fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae84ae8d0f6d9649a9fc87bf720c76b6bc90efae1e987e7c1f64dded370156fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:19 compute-0 podman[250542]: 2025-12-01 20:55:19.91844398 +0000 UTC m=+0.115858931 container init 2f0babad3d5cf3963723732adf9fe5b7f8463058a9f929e9689d027e698ae018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cori, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 01 20:55:19 compute-0 podman[250542]: 2025-12-01 20:55:19.830886512 +0000 UTC m=+0.028301473 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:55:19 compute-0 podman[250542]: 2025-12-01 20:55:19.927305287 +0000 UTC m=+0.124720238 container start 2f0babad3d5cf3963723732adf9fe5b7f8463058a9f929e9689d027e698ae018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cori, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:55:19 compute-0 podman[250542]: 2025-12-01 20:55:19.931262829 +0000 UTC m=+0.128677780 container attach 2f0babad3d5cf3963723732adf9fe5b7f8463058a9f929e9689d027e698ae018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cori, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:55:19 compute-0 ceph-mon[75880]: pgmap v813: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:20 compute-0 vigorous_cori[250558]: {
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:     "0": [
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:         {
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "devices": [
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "/dev/loop3"
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             ],
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_name": "ceph_lv0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_size": "21470642176",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "name": "ceph_lv0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "tags": {
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.cluster_name": "ceph",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.crush_device_class": "",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.encrypted": "0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.objectstore": "bluestore",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.osd_id": "0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.type": "block",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.vdo": "0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.with_tpm": "0"
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             },
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "type": "block",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "vg_name": "ceph_vg0"
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:         }
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:     ],
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:     "1": [
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:         {
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "devices": [
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "/dev/loop4"
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             ],
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_name": "ceph_lv1",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_size": "21470642176",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "name": "ceph_lv1",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "tags": {
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.cluster_name": "ceph",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.crush_device_class": "",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.encrypted": "0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.objectstore": "bluestore",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.osd_id": "1",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.type": "block",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.vdo": "0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.with_tpm": "0"
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             },
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "type": "block",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "vg_name": "ceph_vg1"
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:         }
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:     ],
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:     "2": [
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:         {
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "devices": [
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "/dev/loop5"
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             ],
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_name": "ceph_lv2",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_size": "21470642176",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "name": "ceph_lv2",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "tags": {
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.cluster_name": "ceph",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.crush_device_class": "",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.encrypted": "0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.objectstore": "bluestore",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.osd_id": "2",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.type": "block",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.vdo": "0",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:                 "ceph.with_tpm": "0"
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             },
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "type": "block",
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:             "vg_name": "ceph_vg2"
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:         }
Dec 01 20:55:20 compute-0 vigorous_cori[250558]:     ]
Dec 01 20:55:20 compute-0 vigorous_cori[250558]: }
Dec 01 20:55:20 compute-0 systemd[1]: libpod-2f0babad3d5cf3963723732adf9fe5b7f8463058a9f929e9689d027e698ae018.scope: Deactivated successfully.
Dec 01 20:55:20 compute-0 conmon[250558]: conmon 2f0babad3d5cf3963723 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f0babad3d5cf3963723732adf9fe5b7f8463058a9f929e9689d027e698ae018.scope/container/memory.events
Dec 01 20:55:20 compute-0 podman[250542]: 2025-12-01 20:55:20.234376133 +0000 UTC m=+0.431791064 container died 2f0babad3d5cf3963723732adf9fe5b7f8463058a9f929e9689d027e698ae018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cori, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:55:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae84ae8d0f6d9649a9fc87bf720c76b6bc90efae1e987e7c1f64dded370156fa-merged.mount: Deactivated successfully.
Dec 01 20:55:20 compute-0 podman[250542]: 2025-12-01 20:55:20.272062517 +0000 UTC m=+0.469477448 container remove 2f0babad3d5cf3963723732adf9fe5b7f8463058a9f929e9689d027e698ae018 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:55:20 compute-0 systemd[1]: libpod-conmon-2f0babad3d5cf3963723732adf9fe5b7f8463058a9f929e9689d027e698ae018.scope: Deactivated successfully.
Dec 01 20:55:20 compute-0 sudo[250465]: pam_unix(sudo:session): session closed for user root
Dec 01 20:55:20 compute-0 sudo[250578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:55:20 compute-0 sudo[250578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:55:20 compute-0 sudo[250578]: pam_unix(sudo:session): session closed for user root
Dec 01 20:55:20 compute-0 sudo[250603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:55:20 compute-0 sudo[250603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:55:20 compute-0 podman[250640]: 2025-12-01 20:55:20.720026312 +0000 UTC m=+0.041311598 container create f9b4f2111716f0f80ef4950393ba03c669451cc07da8c02d983a9979de847921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_northcutt, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 01 20:55:20 compute-0 systemd[1]: Started libpod-conmon-f9b4f2111716f0f80ef4950393ba03c669451cc07da8c02d983a9979de847921.scope.
Dec 01 20:55:20 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:55:20 compute-0 podman[250640]: 2025-12-01 20:55:20.784041637 +0000 UTC m=+0.105326943 container init f9b4f2111716f0f80ef4950393ba03c669451cc07da8c02d983a9979de847921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_northcutt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 01 20:55:20 compute-0 podman[250640]: 2025-12-01 20:55:20.789785276 +0000 UTC m=+0.111070572 container start f9b4f2111716f0f80ef4950393ba03c669451cc07da8c02d983a9979de847921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_northcutt, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:55:20 compute-0 laughing_northcutt[250656]: 167 167
Dec 01 20:55:20 compute-0 podman[250640]: 2025-12-01 20:55:20.793160181 +0000 UTC m=+0.114445487 container attach f9b4f2111716f0f80ef4950393ba03c669451cc07da8c02d983a9979de847921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_northcutt, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:55:20 compute-0 systemd[1]: libpod-f9b4f2111716f0f80ef4950393ba03c669451cc07da8c02d983a9979de847921.scope: Deactivated successfully.
Dec 01 20:55:20 compute-0 podman[250640]: 2025-12-01 20:55:20.79411934 +0000 UTC m=+0.115404636 container died f9b4f2111716f0f80ef4950393ba03c669451cc07da8c02d983a9979de847921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_northcutt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 01 20:55:20 compute-0 podman[250640]: 2025-12-01 20:55:20.704775557 +0000 UTC m=+0.026060873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:55:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c8f4b8139fa628a2e5d1135b3390971bb836aee3974559190ff858c8e29a35-merged.mount: Deactivated successfully.
Dec 01 20:55:20 compute-0 podman[250640]: 2025-12-01 20:55:20.830161603 +0000 UTC m=+0.151446899 container remove f9b4f2111716f0f80ef4950393ba03c669451cc07da8c02d983a9979de847921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 01 20:55:20 compute-0 systemd[1]: libpod-conmon-f9b4f2111716f0f80ef4950393ba03c669451cc07da8c02d983a9979de847921.scope: Deactivated successfully.
Dec 01 20:55:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v814: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:21 compute-0 podman[250681]: 2025-12-01 20:55:21.025637822 +0000 UTC m=+0.049358668 container create b06547e196534418dbdd1b26635b7afff31b3d37fac7d07e574475bf7bdbebb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 01 20:55:21 compute-0 systemd[1]: Started libpod-conmon-b06547e196534418dbdd1b26635b7afff31b3d37fac7d07e574475bf7bdbebb2.scope.
Dec 01 20:55:21 compute-0 podman[250681]: 2025-12-01 20:55:21.012355639 +0000 UTC m=+0.036076505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:55:21 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678ce0a0a504ff62b4eefbd1937db953c64c6042f803c6c3aa73e6e124c2a056/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678ce0a0a504ff62b4eefbd1937db953c64c6042f803c6c3aa73e6e124c2a056/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678ce0a0a504ff62b4eefbd1937db953c64c6042f803c6c3aa73e6e124c2a056/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678ce0a0a504ff62b4eefbd1937db953c64c6042f803c6c3aa73e6e124c2a056/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:55:21 compute-0 podman[250681]: 2025-12-01 20:55:21.127999582 +0000 UTC m=+0.151720518 container init b06547e196534418dbdd1b26635b7afff31b3d37fac7d07e574475bf7bdbebb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 01 20:55:21 compute-0 podman[250681]: 2025-12-01 20:55:21.135260858 +0000 UTC m=+0.158981744 container start b06547e196534418dbdd1b26635b7afff31b3d37fac7d07e574475bf7bdbebb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:55:21 compute-0 podman[250681]: 2025-12-01 20:55:21.138959993 +0000 UTC m=+0.162680889 container attach b06547e196534418dbdd1b26635b7afff31b3d37fac7d07e574475bf7bdbebb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noyce, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:55:21 compute-0 lvm[250776]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:55:21 compute-0 lvm[250776]: VG ceph_vg0 finished
Dec 01 20:55:21 compute-0 lvm[250775]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:55:21 compute-0 lvm[250775]: VG ceph_vg1 finished
Dec 01 20:55:21 compute-0 lvm[250778]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:55:21 compute-0 lvm[250778]: VG ceph_vg2 finished
Dec 01 20:55:21 compute-0 hungry_noyce[250697]: {}
Dec 01 20:55:21 compute-0 systemd[1]: libpod-b06547e196534418dbdd1b26635b7afff31b3d37fac7d07e574475bf7bdbebb2.scope: Deactivated successfully.
Dec 01 20:55:21 compute-0 systemd[1]: libpod-b06547e196534418dbdd1b26635b7afff31b3d37fac7d07e574475bf7bdbebb2.scope: Consumed 1.209s CPU time.
Dec 01 20:55:21 compute-0 podman[250681]: 2025-12-01 20:55:21.917864749 +0000 UTC m=+0.941585605 container died b06547e196534418dbdd1b26635b7afff31b3d37fac7d07e574475bf7bdbebb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noyce, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-678ce0a0a504ff62b4eefbd1937db953c64c6042f803c6c3aa73e6e124c2a056-merged.mount: Deactivated successfully.
Dec 01 20:55:21 compute-0 podman[250681]: 2025-12-01 20:55:21.966940447 +0000 UTC m=+0.990661293 container remove b06547e196534418dbdd1b26635b7afff31b3d37fac7d07e574475bf7bdbebb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_noyce, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 01 20:55:21 compute-0 systemd[1]: libpod-conmon-b06547e196534418dbdd1b26635b7afff31b3d37fac7d07e574475bf7bdbebb2.scope: Deactivated successfully.
Dec 01 20:55:21 compute-0 ceph-mon[75880]: pgmap v814: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:22 compute-0 sudo[250603]: pam_unix(sudo:session): session closed for user root
Dec 01 20:55:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:55:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:55:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:55:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:55:22 compute-0 sudo[250793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:55:22 compute-0 sudo[250793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:55:22 compute-0 sudo[250793]: pam_unix(sudo:session): session closed for user root
Dec 01 20:55:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v815: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:55:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:55:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:24 compute-0 ceph-mon[75880]: pgmap v815: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v816: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:26 compute-0 podman[250818]: 2025-12-01 20:55:26.132374642 +0000 UTC m=+0.080575900 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 01 20:55:26 compute-0 ceph-mon[75880]: pgmap v816: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v817: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:28 compute-0 ceph-mon[75880]: pgmap v817: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v818: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:29 compute-0 podman[250838]: 2025-12-01 20:55:29.107069824 +0000 UTC m=+0.060027842 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec 01 20:55:29 compute-0 podman[250839]: 2025-12-01 20:55:29.120082169 +0000 UTC m=+0.079143297 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 20:55:30 compute-0 ceph-mon[75880]: pgmap v818: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v819: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:32 compute-0 ceph-mon[75880]: pgmap v819: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:55:32
Dec 01 20:55:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:55:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:55:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.mgr', 'backups', 'volumes']
Dec 01 20:55:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:55:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v820: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:55:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:55:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:33 compute-0 ceph-mon[75880]: pgmap v820: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v821: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:36 compute-0 ceph-mon[75880]: pgmap v821: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v822: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:38 compute-0 ceph-mon[75880]: pgmap v822: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:38 compute-0 sshd-session[250882]: Accepted publickey for zuul from 192.168.122.10 port 50098 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:55:38 compute-0 systemd-logind[796]: New session 53 of user zuul.
Dec 01 20:55:38 compute-0 systemd[1]: Started Session 53 of User zuul.
Dec 01 20:55:38 compute-0 sshd-session[250882]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:55:38 compute-0 sudo[250886]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 01 20:55:38 compute-0 sudo[250886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:55:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v823: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:40 compute-0 ceph-mon[75880]: pgmap v823: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:55:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v824: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:41 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14694 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:42 compute-0 ceph-mon[75880]: pgmap v824: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:42 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14696 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 01 20:55:42 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366627949' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 01 20:55:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v825: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:43 compute-0 ceph-mon[75880]: from='client.14694 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:43 compute-0 ceph-mon[75880]: from='client.14696 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:43 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2366627949' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 01 20:55:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:44 compute-0 ceph-mon[75880]: pgmap v825: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:55:44.358 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:55:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:55:44.359 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:55:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:55:44.359 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:55:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v826: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:45 compute-0 ovs-vsctl[251148]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 01 20:55:46 compute-0 ceph-mon[75880]: pgmap v826: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:46 compute-0 virtqemud[244294]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 01 20:55:46 compute-0 virtqemud[244294]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 01 20:55:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v827: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:46 compute-0 virtqemud[244294]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 01 20:55:47 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: cache status {prefix=cache status} (starting...)
Dec 01 20:55:47 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: client ls {prefix=client ls} (starting...)
Dec 01 20:55:47 compute-0 lvm[251484]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:55:47 compute-0 lvm[251484]: VG ceph_vg0 finished
Dec 01 20:55:47 compute-0 lvm[251492]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:55:47 compute-0 lvm[251492]: VG ceph_vg2 finished
Dec 01 20:55:47 compute-0 lvm[251515]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:55:47 compute-0 lvm[251515]: VG ceph_vg1 finished
Dec 01 20:55:48 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14700 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:48 compute-0 ceph-mon[75880]: pgmap v827: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:48 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: damage ls {prefix=damage ls} (starting...)
Dec 01 20:55:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:48 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump loads {prefix=dump loads} (starting...)
Dec 01 20:55:48 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14702 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:48 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 01 20:55:48 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 01 20:55:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v828: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:48 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 01 20:55:49 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 01 20:55:49 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14706 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec 01 20:55:49 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252299365' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 01 20:55:49 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 01 20:55:49 compute-0 ceph-mon[75880]: from='client.14700 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:49 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3252299365' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 01 20:55:49 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 01 20:55:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:55:49 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1222815813' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:55:49 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14710 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:49 compute-0 ceph-mgr[76174]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 01 20:55:49 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: 2025-12-01T20:55:49.756+0000 7f311224f640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 01 20:55:49 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: ops {prefix=ops} (starting...)
Dec 01 20:55:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec 01 20:55:50 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2590736529' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 01 20:55:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 01 20:55:50 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470767447' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 01 20:55:50 compute-0 ceph-mon[75880]: from='client.14702 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:50 compute-0 ceph-mon[75880]: pgmap v828: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:50 compute-0 ceph-mon[75880]: from='client.14706 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:50 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1222815813' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:55:50 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2590736529' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 01 20:55:50 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/470767447' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 01 20:55:50 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: session ls {prefix=session ls} (starting...)
Dec 01 20:55:50 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: status {prefix=status} (starting...)
Dec 01 20:55:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 01 20:55:50 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463598758' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 01 20:55:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 01 20:55:50 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207743265' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 01 20:55:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v829: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:51 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14720 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 01 20:55:51 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/963913889' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 01 20:55:51 compute-0 ceph-mon[75880]: from='client.14710 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:51 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3463598758' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 01 20:55:51 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/207743265' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 01 20:55:51 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/963913889' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 01 20:55:51 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14724 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 01 20:55:51 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/402597106' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 01 20:55:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec 01 20:55:52 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1804782506' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 01 20:55:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 01 20:55:52 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3078803183' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 01 20:55:52 compute-0 ceph-mon[75880]: pgmap v829: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:52 compute-0 ceph-mon[75880]: from='client.14720 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:52 compute-0 ceph-mon[75880]: from='client.14724 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:52 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/402597106' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 01 20:55:52 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1804782506' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 01 20:55:52 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3078803183' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 01 20:55:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 01 20:55:52 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2799760397' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 01 20:55:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v830: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 01 20:55:53 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/865880472' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 01 20:55:53 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14736 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:53 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: 2025-12-01T20:55:53.336+0000 7f311224f640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 01 20:55:53 compute-0 ceph-mgr[76174]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 01 20:55:53 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2799760397' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 01 20:55:53 compute-0 ceph-mon[75880]: pgmap v830: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:53 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/865880472' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 01 20:55:53 compute-0 ceph-mon[75880]: from='client.14736 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 01 20:55:53 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3924637919' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 01 20:55:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 01 20:55:53 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466525599' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 01 20:55:54 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14742 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:54 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14746 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v831: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:54 compute-0 nova_compute[244568]: 2025-12-01 20:55:54.960 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:55:54 compute-0 nova_compute[244568]: 2025-12-01 20:55:54.963 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:55:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 01 20:55:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76264320' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007677 3 0.000118
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008237 3 0.000116
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008188 3 0.000082
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=35/36 n=0 ec=18/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008188 3 0.000038
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=35/36 n=0 ec=18/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=35/36 n=0 ec=18/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=35/36 n=0 ec=18/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008470 3 0.000701
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008379 3 0.000269
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008299 3 0.000732
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008137 3 0.000867
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008011 3 0.000085
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007911 3 0.000205
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008062 3 0.000293
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007829 3 0.000373
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007907 3 0.000155
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008266 3 0.000870
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007816 3 0.000115
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007817 3 0.000348
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007726 3 0.000188
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007714 3 0.000095
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007651 3 0.000510
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007713 3 0.000275
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008348 3 0.001111
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008741 3 0.001670
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000029 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/18 les/c/f=36/19/0 sis=35) [2] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 56991744 unmapped: 1720320 heap: 58712064 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:06.412100+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 5 sent 3 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:36.391582+0000 osd.2 (osd.2) 4 : cluster [DBG] 2.1f scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:36.402130+0000 osd.2 (osd.2) 5 : cluster [DBG] 2.1f scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 36 heartbeat osd_stat(store_statfs(0x4fe16d000/0x0/0x4ffc00000, data 0x22053/0x59000, compress 0x0/0x0/0x0, omap 0x36ab, meta 0x1a2c955), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 262991 data_alloc: 218103808 data_used: 593
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 56877056 unmapped: 1835008 heap: 58712064 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 5)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:36.391582+0000 osd.2 (osd.2) 4 : cluster [DBG] 2.1f scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:36.402130+0000 osd.2 (osd.2) 5 : cluster [DBG] 2.1f scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 36 handle_osd_map epochs [37,37], i have 36, src has [1,37]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:07.412295+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 56926208 unmapped: 1785856 heap: 58712064 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:08.412456+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 56934400 unmapped: 1777664 heap: 58712064 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:09.412610+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 7 sent 5 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:38.446033+0000 osd.2 (osd.2) 6 : cluster [DBG] 2.1d scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:38.456593+0000 osd.2 (osd.2) 7 : cluster [DBG] 2.1d scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 37 handle_osd_map epochs [38,38], i have 37, src has [1,38]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 57024512 unmapped: 1687552 heap: 58712064 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:10.412814+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 4 last_log 9 sent 7 num 4 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:40.393459+0000 osd.2 (osd.2) 8 : cluster [DBG] 2.b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:40.404056+0000 osd.2 (osd.2) 9 : cluster [DBG] 2.b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 38 heartbeat osd_stat(store_statfs(0x4fe169000/0x0/0x4ffc00000, data 0x24f00/0x5f000, compress 0x0/0x0/0x0, omap 0x3bc1, meta 0x1a2c43f), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 38 handle_osd_map epochs [39,39], i have 38, src has [1,39]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 61.348998 47 0.000344
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 61.353245 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 62.364930 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] exit Started 62.364961 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active pruub 84.990226746s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.2(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.3(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.4(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.5(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.6(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.7(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.8(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.9(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.a(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.b(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.c(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.d(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.e(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.f(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.10(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.11(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.12(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.13(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.14(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.15(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.16(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.17(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.18(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.19(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1a(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1b(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1c(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1d(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1e(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1f(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 84.990226746s@ mbc={}] exit Reset 0.003364 1 0.000172
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 84.990226746s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 84.990226746s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 84.990226746s@ mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 84.990226746s@ mbc={}] exit Start 0.000014 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 84.990226746s@ mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 84.990226746s@ mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 84.990226746s@ mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 84.990226746s@ mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 84.990226746s@ mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 84.990226746s@ mbc={}] exit Started/Primary/Peering/GetLog 0.000045 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 84.990226746s@ mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 84.990226746s@ mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 84.990226746s@ mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002178 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002882 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004050 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004018 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002487 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002550 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002569 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002550 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.002574 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005768 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003579 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003642 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003647 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003692 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003728 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.003728 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004856 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005682 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.006060 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.006068 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005145 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005127 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005151 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005128 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.006269 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005952 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005937 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005986 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005975 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005968 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.005939 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 39 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=0 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 58318848 unmapped: 393216 heap: 58712064 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 39 handle_osd_map epochs [39,40], i have 39, src has [1,40]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 39 handle_osd_map epochs [39,40], i have 40, src has [1,40]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558636 4 0.000076
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558551 4 0.000050
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000017 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000066 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558556 4 0.000043
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000059 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000096 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000018 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558491 4 0.000049
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000072 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000031 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.557339 4 0.000029
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000022 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.557231 4 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000049 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.557299 4 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.560085 4 0.000055
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000040 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000096 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000051 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.559402 4 0.000069
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000194 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558649 4 0.000027
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000119 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558546 4 0.000058
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000022 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000049 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000045 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000041 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558547 4 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000030 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.557422 4 0.000027
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.557508 4 0.000030
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000036 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.557386 4 0.000043
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.557361 4 0.000025
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000034 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000071 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000036 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.559688 4 0.000027
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 84.990226746s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 0.561559 3 0.000163
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering pruub 84.990226746s@ mbc={}] exit Started/Primary/Peering 0.561668 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.651715279s) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 84.990226746s@ mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000030 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.559807 4 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558329 4 0.000030
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000031 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000020 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.557884 4 0.000031
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000033 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558245 4 0.000027
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558388 4 0.000026
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000036 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000074 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000036 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.559407 4 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558328 4 0.000030
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.557728 4 0.000027
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000054 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000082 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000041 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.557912 4 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.560152 4 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.559028 4 0.000091
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.558905 4 0.000066
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000033 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000034 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000066 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.559188 4 0.000045
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000029 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.559373 4 0.000036
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000027 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000157 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 7)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:38.446033+0000 osd.2 (osd.2) 6 : cluster [DBG] 2.1d scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:38.456593+0000 osd.2 (osd.2) 7 : cluster [DBG] 2.1d scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000230 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 9)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:40.393459+0000 osd.2 (osd.2) 8 : cluster [DBG] 2.b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:40.404056+0000 osd.2 (osd.2) 9 : cluster [DBG] 2.b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004213 3 0.000167
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004242 3 0.000205
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004131 3 0.000099
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004115 3 0.000106
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004246 3 0.000232
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004055 3 0.000115
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004001 3 0.000208
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004001 3 0.000141
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004078 3 0.000399
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004059 3 0.000098
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003993 3 0.000080
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004188 3 0.000133
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006223 3 0.000102
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006273 3 0.000193
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006317 3 0.000096
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006475 3 0.000154
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006447 3 0.000070
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006517 3 0.000068
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006284 3 0.000094
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006444 3 0.000130
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006396 3 0.000094
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006301 3 0.000150
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006282 3 0.000083
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006178 3 0.000135
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006224 3 0.000113
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006226 3 0.000099
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006093 3 0.000085
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006052 3 0.000137
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006095 3 0.000112
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006043 3 0.000076
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005992 3 0.000064
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005874 3 0.000348
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/22 les/c/f=40/23/0 sis=39) [2] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:11.412942+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 324805 data_alloc: 218103808 data_used: 593
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 58515456 unmapped: 1245184 heap: 59760640 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:12.413077+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 58654720 unmapped: 1105920 heap: 59760640 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:13.413266+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:42.454475+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.1c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:42.465026+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.1c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 40 handle_osd_map epochs [41,41], i have 40, src has [1,41]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.101499557s of 10.184075356s, submitted: 210
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 11)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:42.454475+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.1c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:42.465026+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.1c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 58687488 unmapped: 1073152 heap: 59760640 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:14.413456+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 41 heartbeat osd_stat(store_statfs(0x4fe162000/0x0/0x4ffc00000, data 0x2925f/0x68000, compress 0x0/0x0/0x0, omap 0x4362, meta 0x1a2bc9e), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 58695680 unmapped: 1064960 heap: 59760640 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:15.413603+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 58736640 unmapped: 1024000 heap: 59760640 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:16.413742+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 329062 data_alloc: 218103808 data_used: 593
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 58744832 unmapped: 1015808 heap: 59760640 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:17.413903+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 58720256 unmapped: 1040384 heap: 59760640 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:18.414053+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:48.377635+0000 osd.2 (osd.2) 12 : cluster [DBG] 2.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:48.388234+0000 osd.2 (osd.2) 13 : cluster [DBG] 2.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 58769408 unmapped: 991232 heap: 59760640 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:19.414253+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 41 heartbeat osd_stat(store_statfs(0x4fe164000/0x0/0x4ffc00000, data 0x2925f/0x68000, compress 0x0/0x0/0x0, omap 0x4362, meta 0x1a2bc9e), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 13)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:48.377635+0000 osd.2 (osd.2) 12 : cluster [DBG] 2.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:48.388234+0000 osd.2 (osd.2) 13 : cluster [DBG] 2.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 41 handle_osd_map epochs [42,42], i have 41, src has [1,42]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000082 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000016
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000259 1 0.000038
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000019
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000075 1 0.000046
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000060 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000124 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000181 1 0.000060
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000055 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000016
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000088 1 0.000040
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000076 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000020
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000076 1 0.000039
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000035
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000013
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000029
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000024
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.550566 17 0.000116
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.558313 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.558362 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.281690 5 0.000054
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.285979 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.286088 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.286133 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.718297958s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907165527s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.558713 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.448785782s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637702942s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.281709 5 0.000073
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.286022 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.286116 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.286153 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.718235016s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907249451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 42 handle_osd_map epochs [42,42], i have 42, src has [1,42]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715600967s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907165527s@ mbc={}] exit Reset 0.002727 1 0.002750
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715600967s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907165527s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715600967s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907165527s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715600967s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907165527s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715600967s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907165527s@ mbc={}] exit Start 0.000012 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715600967s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907165527s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445893288s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637702942s@ mbc={}] exit Reset 0.002929 1 0.003261
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445893288s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637702942s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445893288s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637702942s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445893288s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637702942s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445893288s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637702942s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445893288s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637702942s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.553990 17 0.000077
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.561851 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.561918 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.561961 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.553947 17 0.000083
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.561744 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.561917 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445705414s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637687683s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.561950 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445675850s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637687683s@ mbc={}] exit Reset 0.000066 1 0.000102
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445675850s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637687683s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445738792s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637771606s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445675850s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637687683s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445675850s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637687683s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445675850s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637687683s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445720673s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] exit Reset 0.000040 1 0.000075
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445675850s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637687683s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445720673s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445720673s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445720673s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445720673s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445720673s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.554128 17 0.000207
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.561852 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.562199 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.562228 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445558548s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637771606s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445539474s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] exit Reset 0.000042 1 0.000073
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445539474s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445539474s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445539474s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445539474s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445539474s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637771606s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.285087 5 0.000318
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289129 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.554420 17 0.000132
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.289267 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.562382 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.289300 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.554376 17 0.000059
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.562434 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.562235 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.562457 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.562373 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714890480s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907226562s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.562420 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714875221s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] exit Reset 0.000032 1 0.000055
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714875221s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714875221s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714875221s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714875221s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445172310s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637535095s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714875221s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445319176s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637695312s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445153236s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637535095s@ mbc={}] exit Reset 0.000050 1 0.000078
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445153236s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637535095s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445153236s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637535095s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445153236s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637535095s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445153236s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637535095s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445299149s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637695312s@ mbc={}] exit Reset 0.000044 1 0.000078
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445153236s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637535095s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.285214 5 0.000052
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445299149s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637695312s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289356 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.289407 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445299149s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637695312s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445299149s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637695312s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.289430 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445299149s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637695312s@ mbc={}] exit Start 0.000013 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.445299149s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637695312s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714809418s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907249451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.285271 5 0.000067
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289356 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714799881s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] exit Reset 0.000045 1 0.000043
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.289418 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714799881s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714799881s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.289449 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714799881s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714799881s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714799881s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.554706 17 0.000059
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714690208s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907188416s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.562744 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.562795 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714676857s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907188416s@ mbc={}] exit Reset 0.000025 1 0.000043
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714676857s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907188416s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714676857s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907188416s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714676857s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907188416s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.562821 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714676857s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907188416s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714676857s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907188416s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.285283 5 0.000029
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289310 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.289370 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.289407 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444966316s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637519836s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714663506s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907226562s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714652061s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] exit Reset 0.000024 1 0.000041
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.285274 5 0.000030
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714652061s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289359 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714652061s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714652061s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.289419 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714652061s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444934845s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637519836s@ mbc={}] exit Reset 0.000054 1 0.000084
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714652061s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907226562s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.289437 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444934845s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637519836s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444934845s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637519836s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714756012s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907356262s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444934845s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637519836s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444934845s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637519836s@ mbc={}] exit Start 0.000012 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714745522s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907356262s@ mbc={}] exit Reset 0.000021 1 0.000037
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714745522s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907356262s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444934845s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637519836s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714745522s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907356262s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714745522s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907356262s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714745522s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907356262s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714745522s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907356262s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.554775 17 0.000082
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.562877 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.563111 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.563131 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444891930s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637565613s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444880486s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637565613s@ mbc={}] exit Reset 0.000022 1 0.000054
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444880486s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637565613s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.285221 5 0.000038
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444880486s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637565613s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289437 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444880486s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637565613s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.289491 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444880486s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637565613s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444880486s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637565613s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.289515 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714426994s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.907234192s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714411736s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907234192s@ mbc={}] exit Reset 0.000117 1 0.000132
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714411736s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907234192s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714411736s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907234192s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714411736s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907234192s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714411736s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907234192s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.714411736s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907234192s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.554847 17 0.000055
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.563232 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.563371 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.563483 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.283308 5 0.000052
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444160461s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.637107849s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289564 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.289610 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444144249s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637107849s@ mbc={}] exit Reset 0.000028 1 0.000047
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.289629 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444144249s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637107849s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444144249s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637107849s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.554598 17 0.000081
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444144249s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637107849s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444144249s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637107849s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.563390 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.444144249s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.637107849s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716694832s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.909675598s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.563600 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.563624 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716678619s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909675598s@ mbc={}] exit Reset 0.000035 1 0.000053
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716678619s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909675598s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716678619s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909675598s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716678619s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909675598s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716678619s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909675598s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716678619s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909675598s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443941116s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636978149s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443918228s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636978149s@ mbc={}] exit Reset 0.000053 1 0.000093
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443918228s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636978149s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.283011 5 0.000081
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443918228s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636978149s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289502 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443918228s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636978149s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.289542 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443918228s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636978149s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.289558 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443918228s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636978149s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.555590 17 0.000105
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.563825 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.563868 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716846466s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.909996033s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.563890 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716833115s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909996033s@ mbc={}] exit Reset 0.000073 1 0.000090
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716833115s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909996033s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716833115s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909996033s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716833115s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909996033s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716833115s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909996033s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716833115s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.909996033s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443575859s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636749268s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443558693s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] exit Reset 0.000035 1 0.000055
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443558693s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443558693s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443558693s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443558693s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443558693s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.283117 5 0.000036
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289536 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.553796 17 0.001814
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.289583 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.556331 17 0.000113
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.563939 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.564048 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.289614 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.564120 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.564046 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.564141 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.564093 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716737747s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910011292s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443473816s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636749268s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716727257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] exit Reset 0.000030 1 0.000050
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443461418s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] exit Reset 0.000023 1 0.000039
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716727257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443461418s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716727257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443461418s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716727257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443461418s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716727257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443461418s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716727257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443461418s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636749268s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443627357s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636924744s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443606377s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636924744s@ mbc={}] exit Reset 0.000066 1 0.000087
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443606377s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636924744s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443606377s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636924744s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443606377s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636924744s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443606377s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636924744s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443606377s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636924744s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.283319 5 0.000068
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289678 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.289777 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.289819 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.283426 5 0.000094
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289752 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.289798 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.289819 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.556703 17 0.000620
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.564431 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.564474 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716434479s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910003662s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.564501 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716445923s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910011292s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716422081s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910003662s@ mbc={}] exit Reset 0.000028 1 0.000053
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716422081s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910003662s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716422081s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910003662s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716422081s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910003662s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716422081s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910003662s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443201065s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636795044s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716422081s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910003662s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443183899s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636795044s@ mbc={}] exit Reset 0.000033 1 0.000093
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443183899s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636795044s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716404915s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] exit Reset 0.000088 1 0.000140
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443183899s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636795044s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.556904 17 0.000090
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716404915s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443183899s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636795044s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716404915s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443183899s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636795044s@ mbc={}] exit Start 0.000155 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.443183899s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636795044s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716404915s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716404915s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] exit Start 0.000021 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.564732 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.564780 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.564800 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.283688 5 0.000071
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289993 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.290039 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.290061 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442579269s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636421204s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442564011s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636421204s@ mbc={}] exit Reset 0.000032 1 0.000195
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442564011s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636421204s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442564011s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636421204s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442564011s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636421204s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442564011s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636421204s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716152191s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910018921s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442564011s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636421204s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716404915s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910011292s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716130257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910018921s@ mbc={}] exit Reset 0.000055 1 0.000063
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716130257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910018921s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716130257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910018921s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716130257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910018921s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716130257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910018921s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716130257s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910018921s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.558051 17 0.000104
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.564894 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.564933 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.564948 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.283792 5 0.000069
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442068100s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636016846s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.289994 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442056656s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636016846s@ mbc={}] exit Reset 0.000022 1 0.000038
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.290091 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.290110 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442056656s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636016846s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442056656s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636016846s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442056656s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636016846s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442056656s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636016846s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442056656s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636016846s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716066360s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910041809s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716053009s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910041809s@ mbc={}] exit Reset 0.000023 1 0.000046
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716053009s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910041809s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716053009s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910041809s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716053009s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910041809s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716053009s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910041809s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716053009s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910041809s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.283823 5 0.000056
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.290084 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.290152 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.558194 17 0.000059
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.290174 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.565076 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.557650 17 0.000098
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.565002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.565125 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.565106 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716078758s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910140991s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.565142 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.565152 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716067314s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910140991s@ mbc={}] exit Reset 0.000023 1 0.000042
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716067314s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910140991s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442327499s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636405945s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716067314s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910140991s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716067314s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910140991s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716067314s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910140991s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.716067314s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910140991s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442316055s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636405945s@ mbc={}] exit Reset 0.000022 1 0.000040
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442316055s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636405945s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442316055s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636405945s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442316055s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636405945s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441908836s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.636001587s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442316055s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636405945s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.442316055s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636405945s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441880226s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636001587s@ mbc={}] exit Reset 0.000053 1 0.000092
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.557848 17 0.000079
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441880226s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636001587s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.565205 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441880226s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636001587s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.565251 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441880226s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636001587s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.565274 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441880226s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636001587s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.558374 17 0.000090
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.565287 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441880226s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.636001587s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.565331 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441657066s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.635810852s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.565348 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441648483s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] exit Reset 0.000020 1 0.000037
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441648483s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441648483s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441601753s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.635780334s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441648483s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441648483s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441648483s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441589355s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635780334s@ mbc={}] exit Reset 0.000022 1 0.000041
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441589355s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635780334s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441589355s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635780334s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441589355s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635780334s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441589355s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635780334s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441589355s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635780334s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.283991 5 0.000053
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.290071 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.290149 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.290180 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715913773s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910156250s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.558453 17 0.000069
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.565529 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.558515 17 0.000111
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715903282s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910156250s@ mbc={}] exit Reset 0.000021 1 0.000035
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.565475 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715903282s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910156250s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.565645 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.565641 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715903282s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910156250s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715903282s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910156250s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715903282s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910156250s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715903282s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910156250s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.565686 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.565705 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441512108s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.635810852s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441550255s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.635856628s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441536903s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635856628s@ mbc={}] exit Reset 0.000026 1 0.000064
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441493034s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] exit Reset 0.000037 1 0.000069
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441536903s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635856628s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441493034s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441536903s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635856628s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441536903s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635856628s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441493034s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.284071 5 0.000030
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441536903s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635856628s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441493034s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441536903s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635856628s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.290188 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441493034s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.290238 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.441493034s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.635810852s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.290260 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715862274s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910232544s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.284109 5 0.000054
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.290190 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.561946 17 0.000203
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.290232 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.565719 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715847969s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910232544s@ mbc={}] exit Reset 0.000052 1 0.000055
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.565843 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715847969s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910232544s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715847969s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910232544s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.565865 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715847969s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910232544s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715847969s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910232544s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715847969s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910232544s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.437717438s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 active pruub 93.632148743s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.437705040s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.632148743s@ mbc={}] exit Reset 0.000024 1 0.000041
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.437705040s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.632148743s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.437705040s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.632148743s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.437705040s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.632148743s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.437705040s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.632148743s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=10.437705040s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY pruub 93.632148743s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.284171 5 0.000048
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.290195 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.290231 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.290247 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715730667s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910224915s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715719223s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910224915s@ mbc={}] exit Reset 0.000023 1 0.000039
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008343 2 0.000045
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715719223s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910224915s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715719223s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910224915s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715719223s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910224915s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715719223s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910224915s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715719223s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910224915s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008108 2 0.000030
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.290254 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007826 2 0.000030
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=39) [2] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715569496s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 active pruub 98.910163879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715547562s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910163879s@ mbc={}] exit Reset 0.000057 1 0.000224
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715547562s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910163879s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715547562s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910163879s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715547562s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910163879s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715547562s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910163879s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.715547562s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.910163879s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000070 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000088 1 0.000037
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.712153435s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] exit Reset 0.006131 1 0.006178
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.712153435s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.712153435s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000111 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.712153435s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.712153435s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] exit Start 0.000016 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=15.712153435s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY pruub 98.907249451s@ mbc={}] enter Started/Stray
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000025 1 0.000078
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000307 1 0.000142
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000071 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000024
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000086 1 0.000031
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000037 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000025
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000037 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000020 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000157 1 0.000042
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000029
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000077 1 0.000055
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000034
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000029
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000021 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000059
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000031
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000017
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000031
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000072 1 0.000023
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000034
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000092 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000025
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000130 1 0.000052
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000055 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000071 1 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007531 2 0.000084
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007343 2 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008410 2 0.000033
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007237 2 0.000038
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006903 2 0.000026
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006826 2 0.000027
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007474 2 0.000024
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006511 2 0.000026
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006304 2 0.000025
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006121 2 0.000024
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005954 2 0.000023
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005876 2 0.000023
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005733 2 0.000024
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005562 2 0.000023
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005410 2 0.000024
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007829 2 0.000022
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007181 2 0.000045
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020389 2 0.000065
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020163 2 0.000059
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019942 2 0.000029
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019773 2 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019924 2 0.000031
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020444 2 0.000025
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008207 2 0.000077
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007634 2 0.000072
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006911 2 0.000086
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 245760 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:20.414440+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 42 handle_osd_map epochs [43,43], i have 42, src has [1,43]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 42 handle_osd_map epochs [42,43], i have 43, src has [1,43]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 42 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006447 2 0.000021
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014673 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006454 2 0.000080
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014427 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006511 2 0.000047
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.015169 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993452 2 0.000027
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014059 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997218 2 0.000026
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003617 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996823 2 0.000021
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004093 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997288 2 0.000034
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003893 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993464 2 0.000051
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013745 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997289 2 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003321 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993456 2 0.000017
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997322 2 0.000018
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013498 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003532 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997258 2 0.000347
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004494 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997022 2 0.000054
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004927 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997750 2 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004760 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997584 2 0.000042
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005251 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992028 2 0.000026
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999072 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998181 2 0.000018
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005489 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993588 2 0.000016
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013440 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993262 2 0.000029
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013273 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998321 2 0.000021
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006846 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998417 2 0.000035
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006318 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998395 2 0.000027
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005861 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992637 2 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013168 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992345 2 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000788 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998476 2 0.000017
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004281 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998586 2 0.000191
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004542 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998572 2 0.000016
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004212 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993468 2 0.000026
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001276 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998636 2 0.000017
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004154 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004638 4 0.000100
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004700 4 0.000090
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014599 7 0.000035
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014575 7 0.000173
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014606 7 0.000068
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014542 7 0.000035
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014858 7 0.000040
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013115 7 0.000045
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013676 7 0.000036
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013093 7 0.000037
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000091 1 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012865 7 0.000048
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013424 7 0.000041
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014065 7 0.000036
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014398 7 0.000035
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013045 7 0.000074
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013057 7 0.000042
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014051 7 0.000063
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014242 7 0.000034
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013635 7 0.000293
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000205 1 0.000017
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000326 1 0.000012
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000413 1 0.000058
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000445 1 0.000012
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000482 1 0.000018
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000525 1 0.000017
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000549 1 0.000013
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000545 1 0.000021
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000569 1 0.000011
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000598 1 0.000010
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000634 1 0.000021
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000697 1 0.000018
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000747 1 0.000017
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000812 1 0.000021
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000862 1 0.000018
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001042 1 0.000267
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007672 4 0.000071
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007765 4 0.000078
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007802 4 0.000187
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007947 4 0.000138
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007890 4 0.000042
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007948 4 0.000036
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007916 4 0.000034
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007923 4 0.000059
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007882 4 0.000198
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007910 4 0.000045
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007894 4 0.000050
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007875 4 0.000039
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007838 4 0.000035
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007880 4 0.000041
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007817 4 0.000045
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007881 4 0.000073
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007804 4 0.000047
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007611 4 0.000202
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007793 4 0.000082
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007896 4 0.000081
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008058 4 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006846 4 0.000057
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006798 4 0.000052
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006673 4 0.000054
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006813 4 0.000057
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006685 4 0.000059
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007817 4 0.000063
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018910 7 0.000048
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018961 7 0.000076
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015844 7 0.000118
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019178 7 0.000079
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000065 1 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018703 7 0.000035
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018568 7 0.000072
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000089 1 0.000011
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018856 7 0.000065
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000141 1 0.000011
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000181 1 0.000012
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000186 1 0.000036
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000242 1 0.000015
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000283 1 0.000088
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022983 7 0.000072
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021248 7 0.000031
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022953 7 0.000054
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022414 7 0.000043
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021718 7 0.000192
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021627 7 0.000035
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022228 7 0.000050
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021424 7 0.000034
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000106 1 0.000059
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022711 7 0.000107
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000147 1 0.000060
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.008326 1 0.000054
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022524 7 0.000085
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.008491 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.023125 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000248 1 0.000078
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000235 1 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000286 1 0.000043
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000400 1 0.000105
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022833 7 0.000039
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021776 7 0.000032
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024390 7 0.000159
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021996 7 0.000033
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022075 7 0.000031
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022119 7 0.000071
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000700 1 0.000020
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000796 1 0.000026
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001155 1 0.000057
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001208 1 0.000164
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000883 1 0.000026
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000978 1 0.000021
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000991 1 0.000013
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001045 1 0.000033
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022713 7 0.000058
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022641 7 0.000031
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022497 7 0.000103
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001048 1 0.000027
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001135 1 0.000076
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000151 1 0.000031
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000172 1 0.000016
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000167 1 0.000059
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.012048 1 0.000107
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.012280 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.026928 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.019003 1 0.000082
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.019358 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.033920 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.026305 1 0.000031
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.026774 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.041375 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.033725 1 0.000047
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.034211 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.049090 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.041006 1 0.000026
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.041515 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.054662 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.048316 1 0.000045
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.048868 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.062566 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.055829 1 0.000019
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.056406 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.069518 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.063035 1 0.000017
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.063632 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.077719 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.070438 1 0.000041
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.071010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.084463 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.077909 1 0.000016
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.078535 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.092957 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.085155 1 0.000016
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.085815 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.098890 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.092491 1 0.000017
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.093213 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.106295 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.099819 1 0.000015
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.100590 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.114670 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.107156 1 0.000015
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.107989 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.122254 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.114507 1 0.000015
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.115393 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.129087 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.121833 1 0.000013
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.122901 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.135785 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.126594 1 0.000022
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.126682 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.145630 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.133929 1 0.000040
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.134061 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.153058 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.141340 1 0.000019
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.141506 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.157397 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.148655 1 0.000038
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.148861 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.168065 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.156001 1 0.000052
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.156223 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.174967 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.163298 1 0.000042
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.163575 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.182461 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.170647 1 0.000031
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.170965 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.189578 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.174034 1 0.000052
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.174162 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.197193 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.181300 1 0.000086
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.181493 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.202769 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.188587 1 0.000021
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.188857 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.211838 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.195952 1 0.000037
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.196221 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.218120 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.203380 1 0.000030
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.203700 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.225366 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.210718 1 0.000028
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.211150 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.233602 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.218010 1 0.000043
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.218767 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.240222 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.225060 1 0.000037
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.225885 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.248140 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.232122 1 0.000087
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.233308 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.256058 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.239289 1 0.000052
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.240522 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.263118 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.246744 1 0.000072
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.247692 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.270559 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.254134 1 0.000023
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.255153 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.276958 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.261378 1 0.000019
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.262400 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.284418 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.268793 1 0.000018
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.269867 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.294315 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.276421 1 0.000079
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.277549 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.299718 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.283623 1 0.000021
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.284796 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.306922 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.290775 1 0.000018
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.290950 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.313699 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.298146 1 0.000015
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.298339 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.321000 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.305530 1 0.000015
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.305740 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.328306 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 1015808 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:21.414583+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 43 handle_osd_map epochs [43,44], i have 43, src has [1,44]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 308916 data_alloc: 218103808 data_used: 593
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 983040 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:22.414736+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:52.375886+0000 osd.2 (osd.2) 14 : cluster [DBG] 5.1c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:52.386469+0000 osd.2 (osd.2) 15 : cluster [DBG] 5.1c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 15)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:52.375886+0000 osd.2 (osd.2) 14 : cluster [DBG] 5.1c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:52.386469+0000 osd.2 (osd.2) 15 : cluster [DBG] 5.1c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60948480 unmapped: 909312 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:23.414977+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:53.360950+0000 osd.2 (osd.2) 16 : cluster [DBG] 2.1a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:53.371540+0000 osd.2 (osd.2) 17 : cluster [DBG] 2.1a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 17)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:53.360950+0000 osd.2 (osd.2) 16 : cluster [DBG] 2.1a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:53.371540+0000 osd.2 (osd.2) 17 : cluster [DBG] 2.1a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 901120 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.157582283s of 10.313741684s, submitted: 326
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:24.415244+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:54.361103+0000 osd.2 (osd.2) 18 : cluster [DBG] 5.1f scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:54.371654+0000 osd.2 (osd.2) 19 : cluster [DBG] 5.1f scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 19)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:54.361103+0000 osd.2 (osd.2) 18 : cluster [DBG] 5.1f scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:54.371654+0000 osd.2 (osd.2) 19 : cluster [DBG] 5.1f scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 44 heartbeat osd_stat(store_statfs(0x4fe159000/0x0/0x4ffc00000, data 0x2d9e1/0x71000, compress 0x0/0x0/0x0, omap 0x4b03, meta 0x1a2b4fd), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 44 handle_osd_map epochs [45,46], i have 44, src has [1,46]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 44 handle_osd_map epochs [45,46], i have 46, src has [1,46]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60973056 unmapped: 884736 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe159000/0x0/0x4ffc00000, data 0x2d9e1/0x71000, compress 0x0/0x0/0x0, omap 0x4b03, meta 0x1a2b4fd), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:25.415434+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 876544 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:26.415684+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:56.362671+0000 osd.2 (osd.2) 20 : cluster [DBG] 5.10 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:56.373267+0000 osd.2 (osd.2) 21 : cluster [DBG] 5.10 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 21)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:56.362671+0000 osd.2 (osd.2) 20 : cluster [DBG] 5.10 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:56.373267+0000 osd.2 (osd.2) 21 : cluster [DBG] 5.10 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 326852 data_alloc: 218103808 data_used: 593
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 868352 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:27.415973+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61079552 unmapped: 778240 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:28.416133+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 770048 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 48 heartbeat osd_stat(store_statfs(0x4fe149000/0x0/0x4ffc00000, data 0x330ab/0x7d000, compress 0x0/0x0/0x0, omap 0x52a4, meta 0x1a2ad5c), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 48 handle_osd_map epochs [49,50], i have 48, src has [1,50]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 48 handle_osd_map epochs [49,49], i have 50, src has [1,49]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:29.416244+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 761856 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:30.416382+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 50 heartbeat osd_stat(store_statfs(0x4fe141000/0x0/0x4ffc00000, data 0x35b41/0x83000, compress 0x0/0x0/0x0, omap 0x552f, meta 0x1a2aad1), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 753664 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:31.416593+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 336528 data_alloc: 218103808 data_used: 593
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 753664 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:32.416845+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 50 handle_osd_map epochs [51,52], i have 50, src has [1,52]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=0 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000129 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=0 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000043
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000294 1 0.000075
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001174 2 0.000069
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 52 heartbeat osd_stat(store_statfs(0x4fe141000/0x0/0x4ffc00000, data 0x35b41/0x83000, compress 0x0/0x0/0x0, omap 0x552f, meta 0x1a2aad1), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 679936 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:33.417028+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:03.292070+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.14 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:03.302563+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.14 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 52 handle_osd_map epochs [52,53], i have 53, src has [1,53]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011456 2 0.000076
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013011 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/39 les/c/f=53/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002856 3 0.000463
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/39 les/c/f=53/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/39 les/c/f=53/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/39 les/c/f=53/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 23)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:03.292070+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.14 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:03.302563+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.14 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fe141000/0x0/0x4ffc00000, data 0x3876d/0x89000, compress 0x0/0x0/0x0, omap 0x57ba, meta 0x1a2a846), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 671744 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:34.417255+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:04.263906+0000 osd.2 (osd.2) 24 : cluster [DBG] 2.12 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:04.274430+0000 osd.2 (osd.2) 25 : cluster [DBG] 2.12 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 25)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:04.263906+0000 osd.2 (osd.2) 24 : cluster [DBG] 2.12 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:04.274430+0000 osd.2 (osd.2) 25 : cluster [DBG] 2.12 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fe13e000/0x0/0x4ffc00000, data 0x39bed/0x8c000, compress 0x0/0x0/0x0, omap 0x5a45, meta 0x1a2a5bb), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 663552 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:35.417467+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 663552 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:36.417702+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 348824 data_alloc: 218103808 data_used: 593
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 663552 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.802775383s of 12.856064796s, submitted: 19
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:37.417891+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:07.217368+0000 osd.2 (osd.2) 26 : cluster [DBG] 2.10 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:07.227900+0000 osd.2 (osd.2) 27 : cluster [DBG] 2.10 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 27)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:07.217368+0000 osd.2 (osd.2) 26 : cluster [DBG] 2.10 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:07.227900+0000 osd.2 (osd.2) 27 : cluster [DBG] 2.10 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 630784 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:38.418127+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 622592 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:39.418477+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:09.230046+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.17 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:09.240643+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.17 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 29)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:09.230046+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.17 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:09.240643+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.17 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 1662976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:40.418911+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:10.204372+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:10.214848+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 54 handle_osd_map epochs [55,56], i have 54, src has [1,56]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 31)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:10.204372+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:10.214848+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 56 heartbeat osd_stat(store_statfs(0x4fe13d000/0x0/0x4ffc00000, data 0x3b203/0x8f000, compress 0x0/0x0/0x0, omap 0x5cd0, meta 0x1a2a330), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 1597440 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:41.419175+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 364937 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 1695744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:42.419332+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61251584 unmapped: 1654784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:43.419575+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 1646592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:44.419763+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:14.222545+0000 osd.2 (osd.2) 32 : cluster [DBG] 2.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:14.233145+0000 osd.2 (osd.2) 33 : cluster [DBG] 2.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 33)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:14.222545+0000 osd.2 (osd.2) 32 : cluster [DBG] 2.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:14.233145+0000 osd.2 (osd.2) 33 : cluster [DBG] 2.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 58 heartbeat osd_stat(store_statfs(0x4fe12b000/0x0/0x4ffc00000, data 0x4072f/0x9b000, compress 0x0/0x0/0x0, omap 0x6471, meta 0x1a29b8f), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 58 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 1744896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:45.420043+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:15.202976+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:15.213600+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 35)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:15.202976+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:15.213600+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 1695744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:46.420301+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:16.170714+0000 osd.2 (osd.2) 36 : cluster [DBG] 2.c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:16.181252+0000 osd.2 (osd.2) 37 : cluster [DBG] 2.c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe12e000/0x0/0x4ffc00000, data 0x41d4d/0x9e000, compress 0x0/0x0/0x0, omap 0x66fc, meta 0x1a29904), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 379334 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 37)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:16.170714+0000 osd.2 (osd.2) 36 : cluster [DBG] 2.c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:16.181252+0000 osd.2 (osd.2) 37 : cluster [DBG] 2.c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 1679360 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:47.420516+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 60 heartbeat osd_stat(store_statfs(0x4fe129000/0x0/0x4ffc00000, data 0x43363/0xa1000, compress 0x0/0x0/0x0, omap 0x6987, meta 0x1a29679), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.860921860s of 10.903360367s, submitted: 18
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 1671168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:48.420730+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:18.120793+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:18.131245+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 39)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:18.120793+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:18.131245+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 1662976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:49.420949+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:19.114225+0000 osd.2 (osd.2) 40 : cluster [DBG] 2.0 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:19.124779+0000 osd.2 (osd.2) 41 : cluster [DBG] 2.0 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 41)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:19.114225+0000 osd.2 (osd.2) 40 : cluster [DBG] 2.0 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:19.124779+0000 osd.2 (osd.2) 41 : cluster [DBG] 2.0 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 1662976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:50.421206+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 60 handle_osd_map epochs [62,63], i have 60, src has [1,63]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 60 handle_osd_map epochs [61,63], i have 60, src has [1,63]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 499712 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:51.421341+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 396828 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 450560 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:52.421466+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:22.061458+0000 osd.2 (osd.2) 42 : cluster [DBG] 5.0 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:22.071999+0000 osd.2 (osd.2) 43 : cluster [DBG] 5.0 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f(unlocked)] enter Initial
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=0 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000086 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=0 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000023
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000107 1 0.000039
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001271 2 0.000039
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 43)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:22.061458+0000 osd.2 (osd.2) 42 : cluster [DBG] 5.0 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:22.071999+0000 osd.2 (osd.2) 43 : cluster [DBG] 5.0 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 handle_osd_map epochs [64,64], i have 64, src has [1,64]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.410014 2 0.000165
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 0.411495 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.001976 3 0.000173
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000135 1 0.000108
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.137404 3 0.000039
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 368640 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:53.421686+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11c000/0x0/0x4ffc00000, data 0x488a5/0xae000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 352256 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:54.421806+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:24.029842+0000 osd.2 (osd.2) 44 : cluster [DBG] 2.1 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:24.040451+0000 osd.2 (osd.2) 45 : cluster [DBG] 2.1 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 45)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:24.029842+0000 osd.2 (osd.2) 44 : cluster [DBG] 2.1 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:24.040451+0000 osd.2 (osd.2) 45 : cluster [DBG] 2.1 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:55.422092+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:56.422239+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 409900 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 335872 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:57.422460+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:27.019687+0000 osd.2 (osd.2) 46 : cluster [DBG] 5.6 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:27.030218+0000 osd.2 (osd.2) 47 : cluster [DBG] 5.6 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 47)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:27.019687+0000 osd.2 (osd.2) 46 : cluster [DBG] 5.6 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:27.030218+0000 osd.2 (osd.2) 47 : cluster [DBG] 5.6 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 303104 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:58.422690+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 303104 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:59.422871+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.782981873s of 11.941884995s, submitted: 20
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 303104 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:00.423168+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:30.062666+0000 osd.2 (osd.2) 48 : cluster [DBG] 5.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:30.072779+0000 osd.2 (osd.2) 49 : cluster [DBG] 5.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 49)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:30.062666+0000 osd.2 (osd.2) 48 : cluster [DBG] 5.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:30.072779+0000 osd.2 (osd.2) 49 : cluster [DBG] 5.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:01.423474+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 413714 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:02.423683+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 278528 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:03.423835+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:04.423999+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:05.424275+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:06.424467+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 413714 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:07.424633+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:37.163610+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.d scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:37.174213+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.d scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 51)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:37.163610+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.d scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:37.174213+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.d scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:08.424845+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:38.126769+0000 osd.2 (osd.2) 52 : cluster [DBG] 5.1b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:38.137375+0000 osd.2 (osd.2) 53 : cluster [DBG] 5.1b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 53)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:38.126769+0000 osd.2 (osd.2) 52 : cluster [DBG] 5.1b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:38.137375+0000 osd.2 (osd.2) 53 : cluster [DBG] 5.1b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:09.425070+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:39.082620+0000 osd.2 (osd.2) 54 : cluster [DBG] 2.1e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:39.093153+0000 osd.2 (osd.2) 55 : cluster [DBG] 2.1e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.030481339s of 10.046459198s, submitted: 8
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 55)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:39.082620+0000 osd.2 (osd.2) 54 : cluster [DBG] 2.1e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:39.093153+0000 osd.2 (osd.2) 55 : cluster [DBG] 2.1e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 212992 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:10.425312+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:40.109061+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.1b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:40.119614+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.1b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 57)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:40.109061+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.1b scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:40.119614+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.1b scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:11.425599+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423364 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:12.425778+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:13.425945+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:14.426089+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:15.426285+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:45.078921+0000 osd.2 (osd.2) 58 : cluster [DBG] 4.1a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:45.089433+0000 osd.2 (osd.2) 59 : cluster [DBG] 4.1a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 59)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:45.078921+0000 osd.2 (osd.2) 58 : cluster [DBG] 4.1a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:45.089433+0000 osd.2 (osd.2) 59 : cluster [DBG] 4.1a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:16.426557+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 425777 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:17.426729+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 172032 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:18.426895+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:48.119043+0000 osd.2 (osd.2) 60 : cluster [DBG] 4.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:48.129594+0000 osd.2 (osd.2) 61 : cluster [DBG] 4.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 61)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:48.119043+0000 osd.2 (osd.2) 60 : cluster [DBG] 4.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:48.129594+0000 osd.2 (osd.2) 61 : cluster [DBG] 4.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 172032 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:19.427111+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.991535187s of 10.001205444s, submitted: 6
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:20.427235+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:50.110398+0000 osd.2 (osd.2) 62 : cluster [DBG] 3.7 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:50.120998+0000 osd.2 (osd.2) 63 : cluster [DBG] 3.7 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 63)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:50.110398+0000 osd.2 (osd.2) 62 : cluster [DBG] 3.7 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:50.120998+0000 osd.2 (osd.2) 63 : cluster [DBG] 3.7 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:21.427439+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430599 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:22.427566+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:23.427691+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 131072 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:24.427922+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:54.137839+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.18 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:54.148419+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.18 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 65)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:54.137839+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.18 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:54.148419+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.18 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:25.428247+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:55.122167+0000 osd.2 (osd.2) 66 : cluster [DBG] 7.2 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:55.132751+0000 osd.2 (osd.2) 67 : cluster [DBG] 7.2 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 67)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:55.122167+0000 osd.2 (osd.2) 66 : cluster [DBG] 7.2 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:55.132751+0000 osd.2 (osd.2) 67 : cluster [DBG] 7.2 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:26.428550+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 435423 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 106496 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:27.428845+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:57.132067+0000 osd.2 (osd.2) 68 : cluster [DBG] 7.1 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:57.142632+0000 osd.2 (osd.2) 69 : cluster [DBG] 7.1 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 69)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:57.132067+0000 osd.2 (osd.2) 68 : cluster [DBG] 7.1 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:57.142632+0000 osd.2 (osd.2) 69 : cluster [DBG] 7.1 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 98304 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:28.429247+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:29.429476+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:59.196990+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.1 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:59.207500+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.1 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 71)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:59.196990+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.1 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:59.207500+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.1 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:30.429775+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:31.430015+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440245 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.937257767s of 12.093973160s, submitted: 10
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:32.430211+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:02.204453+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:02.215040+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 73)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:02.204453+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:02.215040+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:33.430429+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:03.239660+0000 osd.2 (osd.2) 74 : cluster [DBG] 7.5 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:03.250240+0000 osd.2 (osd.2) 75 : cluster [DBG] 7.5 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 75)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:03.239660+0000 osd.2 (osd.2) 74 : cluster [DBG] 7.5 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:03.250240+0000 osd.2 (osd.2) 75 : cluster [DBG] 7.5 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 57344 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:34.430652+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:04.234703+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.5 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:04.245243+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.5 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 77)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:04.234703+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.5 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:04.245243+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.5 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:35.430879+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:05.224808+0000 osd.2 (osd.2) 78 : cluster [DBG] 7.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:05.235378+0000 osd.2 (osd.2) 79 : cluster [DBG] 7.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 79)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:05.224808+0000 osd.2 (osd.2) 78 : cluster [DBG] 7.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:05.235378+0000 osd.2 (osd.2) 79 : cluster [DBG] 7.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:36.431077+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 449889 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 40960 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:37.431225+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:07.277722+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:07.288298+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 81)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:07.277722+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:07.288298+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:38.431393+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:39.431603+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:40.431795+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:41.431959+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:11.296957+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:11.307478+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 83)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:11.296957+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:11.307478+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454711 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.079680443s of 10.133202553s, submitted: 12
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:42.432130+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:12.337689+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.11 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:12.348217+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.11 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 85)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:12.337689+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.11 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:12.348217+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.11 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:43.432326+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:44.432529+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:45.432707+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:46.432873+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 457124 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:47.433083+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:48.433306+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:18.278435+0000 osd.2 (osd.2) 86 : cluster [DBG] 7.15 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:18.289054+0000 osd.2 (osd.2) 87 : cluster [DBG] 7.15 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 87)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:18.278435+0000 osd.2 (osd.2) 86 : cluster [DBG] 7.15 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:18.289054+0000 osd.2 (osd.2) 87 : cluster [DBG] 7.15 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:49.433615+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:50.433894+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:51.434127+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 459537 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:52.434271+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.807907104s of 10.841655731s, submitted: 4
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:53.434503+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:23.179388+0000 osd.2 (osd.2) 88 : cluster [DBG] 4.11 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:23.190007+0000 osd.2 (osd.2) 89 : cluster [DBG] 4.11 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 89)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:23.179388+0000 osd.2 (osd.2) 88 : cluster [DBG] 4.11 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:23.190007+0000 osd.2 (osd.2) 89 : cluster [DBG] 4.11 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:54.434863+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 991232 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:55.435019+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 983040 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:56.435220+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:26.189468+0000 osd.2 (osd.2) 90 : cluster [DBG] 4.13 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:26.200069+0000 osd.2 (osd.2) 91 : cluster [DBG] 4.13 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 91)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:26.189468+0000 osd.2 (osd.2) 90 : cluster [DBG] 4.13 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:26.200069+0000 osd.2 (osd.2) 91 : cluster [DBG] 4.13 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464363 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:57.435444+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:58.435609+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:59.435734+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:00.435883+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:01.436149+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464363 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:02.436234+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:03.436392+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:04.436528+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.918931961s of 11.925214767s, submitted: 4
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:05.436752+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:35.104582+0000 osd.2 (osd.2) 92 : cluster [DBG] 3.16 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:35.115127+0000 osd.2 (osd.2) 93 : cluster [DBG] 3.16 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:06.436976+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 93)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:35.104582+0000 osd.2 (osd.2) 92 : cluster [DBG] 3.16 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:35.115127+0000 osd.2 (osd.2) 93 : cluster [DBG] 3.16 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 466776 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:07.437157+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:08.437237+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:38.090955+0000 osd.2 (osd.2) 94 : cluster [DBG] 3.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:38.101522+0000 osd.2 (osd.2) 95 : cluster [DBG] 3.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 95)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:38.090955+0000 osd.2 (osd.2) 94 : cluster [DBG] 3.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:38.101522+0000 osd.2 (osd.2) 95 : cluster [DBG] 3.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:09.437417+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:39.057932+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:39.068546+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 97)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:39.057932+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.e scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:39.068546+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.e scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:10.437628+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:40.058947+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:40.069522+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 99)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:40.058947+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:40.069522+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:11.437821+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:41.087741+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.11 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:41.098430+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.11 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476422 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 101)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:41.087741+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.11 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:41.098430+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.11 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:12.438065+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:13.438208+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:14.438334+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:44.101466+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.1c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:44.112205+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.1c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.202469826s of 10.040797234s, submitted: 12
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:15.438556+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 4 last_log 105 sent 103 num 4 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:45.145462+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.1a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:45.156059+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.1a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 103)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:44.101466+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.1c scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:44.112205+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.1c scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:16.438754+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 105)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:45.145462+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.1a scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:45.156059+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.1a scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481248 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:17.438911+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:47.154535+0000 osd.2 (osd.2) 106 : cluster [DBG] 3.1d scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:47.165086+0000 osd.2 (osd.2) 107 : cluster [DBG] 3.1d scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:18.439063+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 4 last_log 109 sent 107 num 4 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:48.137435+0000 osd.2 (osd.2) 108 : cluster [DBG] 3.18 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:48.147997+0000 osd.2 (osd.2) 109 : cluster [DBG] 3.18 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 107)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:47.154535+0000 osd.2 (osd.2) 106 : cluster [DBG] 3.1d scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:47.165086+0000 osd.2 (osd.2) 107 : cluster [DBG] 3.1d scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:19.439224+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 109)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:48.137435+0000 osd.2 (osd.2) 108 : cluster [DBG] 3.18 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:48.147997+0000 osd.2 (osd.2) 109 : cluster [DBG] 3.18 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:20.439390+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:21.439587+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486074 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:22.439846+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:23.439974+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:24.440271+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.685177803s of 10.026976585s, submitted: 6
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:25.440468+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:55.172445+0000 osd.2 (osd.2) 110 : cluster [DBG] 6.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:55.183044+0000 osd.2 (osd.2) 111 : cluster [DBG] 6.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 111)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:55.172445+0000 osd.2 (osd.2) 110 : cluster [DBG] 6.8 scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:55.183044+0000 osd.2 (osd.2) 111 : cluster [DBG] 6.8 scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:26.440732+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 488485 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:27.440983+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:57.153674+0000 osd.2 (osd.2) 112 : cluster [DBG] 6.f scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:57.174867+0000 osd.2 (osd.2) 113 : cluster [DBG] 6.f scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 113)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:57.153674+0000 osd.2 (osd.2) 112 : cluster [DBG] 6.f scrub starts
Dec 01 20:55:55 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:57.174867+0000 osd.2 (osd.2) 113 : cluster [DBG] 6.f scrub ok
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:28.441333+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:29.441600+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:30.441829+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:31.442016+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:32.442246+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:33.442447+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:34.442678+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:35.442913+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:36.443107+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:37.443323+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:38.443442+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:39.443621+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:40.443864+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:41.444092+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:42.444307+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:43.444518+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:44.444739+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:45.444983+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:46.445144+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:47.445416+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:48.445625+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:49.445951+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:50.446232+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:51.446444+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:52.446634+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:53.446804+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:54.446983+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:55.447247+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:56.447522+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:57.447775+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:58.447912+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:59.448223+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:00.448558+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:01.448765+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 573440 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:02.448985+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 573440 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:03.449153+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:04.449334+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:05.449518+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:06.449848+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:07.450341+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:08.450621+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:09.450854+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:10.451139+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:11.451515+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:12.451791+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:13.452034+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:14.452274+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:15.452598+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:16.452817+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:17.452962+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:18.453083+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:19.453217+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:20.453384+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:21.453543+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:22.453661+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:23.453803+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:24.454003+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:25.454202+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:26.454362+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:27.454494+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:28.454618+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:29.454790+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:30.455039+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:31.455216+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:32.455359+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:33.455546+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:34.455789+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:35.456058+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:36.456390+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:37.456602+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:38.456785+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:39.457067+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:40.457319+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:41.457562+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:42.457842+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:43.458112+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:44.458390+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:45.458635+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:46.458948+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:47.459286+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:48.459484+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:49.459713+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:50.459947+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:51.460147+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:52.460424+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:53.460712+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:54.460916+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:55.461100+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:56.461293+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:57.461517+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:58.461731+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:59.461903+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:00.462147+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:01.462358+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:02.462542+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:03.462713+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:04.462914+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:05.463144+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:06.463302+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:07.463466+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:08.463648+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:09.463806+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:10.464407+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:11.464550+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:12.464693+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:13.464837+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:14.464992+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:15.465117+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:16.465251+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:17.465354+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:18.465532+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:19.465678+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:20.466693+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:21.467395+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:22.467565+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:23.467816+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:24.468065+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:25.468305+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:26.468808+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:27.469499+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:28.469949+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:29.470239+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:30.470507+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:31.470780+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:32.470990+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:33.471229+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:34.471415+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:35.471691+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:36.471885+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:37.472078+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:38.472277+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:39.472504+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:40.472814+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:41.473289+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:42.473824+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:43.474335+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:44.474728+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:45.475052+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:46.475470+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 90112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:47.475651+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:48.475819+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:49.476008+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:50.476238+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:51.476459+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:52.476697+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:53.476923+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:54.477142+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:55.477358+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:56.477513+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:57.477735+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 49152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:58.477908+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 49152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:59.478147+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:00.478424+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:01.478737+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:02.478938+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:03.479103+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:04.479297+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:05.479489+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:06.479748+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:07.479960+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3924637919' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 01 20:55:55 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3466525599' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 01 20:55:55 compute-0 ceph-mon[75880]: from='client.14742 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:08.480163+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:09.480378+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:10.480610+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:11.480977+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:12.481307+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:13.481527+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:14.481713+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:15.481924+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:16.482151+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:17.482376+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:18.482544+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:19.482713+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:20.482919+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:21.483049+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:22.483161+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:23.483240+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:24.483369+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:25.483518+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:26.483607+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:27.483781+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:28.483942+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:29.484144+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:30.484395+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:31.484522+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:32.484697+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:33.484858+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:34.484994+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:35.485219+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:36.485404+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:37.485603+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:38.485867+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:39.486102+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:40.486276+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:41.486445+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:42.486625+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:43.486872+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:44.487299+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:45.487754+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:46.488016+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:47.488218+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:48.488385+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:49.488626+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:50.488924+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:51.489164+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:52.489433+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:53.489613+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:54.490083+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:55.490298+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:56.490543+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:57.490725+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:58.490897+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:59.491085+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:00.491338+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:01.491596+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:02.491828+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:03.492159+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:04.492377+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:05.492597+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:06.492895+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:07.493132+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:08.493347+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:09.493531+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:10.493722+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:11.493879+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:12.494061+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:13.494203+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:14.494343+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:15.494474+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:16.494596+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:17.494763+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:18.494928+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:19.495087+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:20.495275+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:21.495423+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:22.495577+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:23.495698+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:24.495855+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:25.496022+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:26.496195+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:27.496367+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:28.496547+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:29.496771+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:30.496955+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:31.497119+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:32.497290+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:33.497462+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:34.498037+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:35.498203+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:36.498341+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:37.498786+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:38.499165+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:39.499363+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:40.499566+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:41.499779+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:42.500088+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:43.500356+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:44.500576+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:45.500917+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:46.501313+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:47.501558+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:48.501752+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:49.502007+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:50.502274+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:51.502485+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:52.502717+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:53.502922+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:54.503109+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:55.503265+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:56.503428+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:57.503649+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:58.503892+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:59.504230+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:00.504453+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:01.504683+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:02.504879+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:03.505094+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:04.505330+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:05.505500+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:06.505742+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:07.505924+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:08.506076+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:09.506157+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:10.506370+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:11.506497+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:12.506667+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:13.506802+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:14.506956+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:15.507045+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:16.507233+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:17.507320+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:18.507445+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:19.507526+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:20.507658+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:21.527305+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:22.527462+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:23.527669+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:24.527841+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:25.528098+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:26.528343+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 622592 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:27.528571+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 622592 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:28.528812+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:29.529045+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:30.529293+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:31.529475+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:32.529625+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:33.529778+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:34.529950+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:35.530091+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:36.530244+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:37.530935+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:38.531596+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:39.531746+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:40.531899+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:41.532074+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:42.532412+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:43.532538+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:44.532653+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:45.532805+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:46.532946+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:47.533083+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:48.533230+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:49.533379+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:50.533528+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:51.533684+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:52.533826+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:53.534113+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:54.534272+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:55.534440+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:56.534631+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:57.534775+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:58.535032+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:59.535228+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:00.535430+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:01.535558+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:02.535722+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:03.536225+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:04.536380+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:05.536604+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:06.536752+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:07.536950+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:08.537115+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:09.537287+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:10.537454+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:11.537617+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:12.537733+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:13.537844+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:14.537997+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:15.538130+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:16.538260+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:17.538400+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:18.538497+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:19.538659+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:20.538899+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:21.539110+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:22.539287+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:23.539470+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:24.539660+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:25.539961+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:26.540258+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:27.540588+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:28.540827+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:29.541061+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:30.541255+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:31.541363+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:32.541508+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:33.541682+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:34.541849+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:35.541993+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:36.542151+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:37.542265+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:38.542414+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:39.543326+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:40.543498+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:41.543632+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:42.544119+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:43.545170+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:44.546349+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:45.546629+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:46.546797+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:47.547002+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:48.547430+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:49.547738+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:50.548089+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:51.548261+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:52.548850+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:53.549033+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:54.549498+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:55.549701+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4156 writes, 19K keys, 4156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4156 writes, 370 syncs, 11.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4156 writes, 19K keys, 4156 commit groups, 1.0 writes per commit group, ingest: 16.17 MB, 0.03 MB/s
                                           Interval WAL: 4156 writes, 370 syncs, 11.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:56.550051+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:57.550457+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:58.550718+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:59.550927+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:00.551159+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:01.551392+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 245760 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:02.551547+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 245760 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:03.551684+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:04.551825+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:05.552014+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:06.552203+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 221184 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:07.552471+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 221184 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:08.552738+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 212992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:09.552949+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 212992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:10.553106+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 212992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:11.553319+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:12.553781+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:13.554236+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:14.554735+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:15.554884+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:16.555062+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:17.555211+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:18.555359+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:19.555563+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:20.555810+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:21.555989+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 172032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:22.556167+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 172032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:23.556363+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:24.556637+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:25.556835+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:26.557042+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:27.557371+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:28.557641+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:29.557825+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:30.558133+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:31.558383+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:32.558582+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:33.558848+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:34.558988+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:35.559235+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:36.559440+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:37.559671+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:38.559858+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:39.560033+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:40.560240+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:41.560472+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:42.560621+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:43.560759+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:44.560889+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:45.561031+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:46.561267+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 90112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:47.561395+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 90112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:48.561561+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:49.561749+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:50.561969+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:51.562120+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:52.562277+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:53.562483+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:54.562619+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:55.562749+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:56.562900+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:57.563133+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:58.563281+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:59.563423+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:00.563633+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:01.563828+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:02.564037+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:03.564236+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:04.564386+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:05.564539+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:06.564695+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:07.564843+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:08.564962+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:09.565083+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:10.565231+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:11.565358+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:12.565487+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 8192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:13.565653+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:14.565786+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:15.565940+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:16.566095+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:17.566262+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:18.566390+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:19.566502+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:20.566670+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:21.566828+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:22.566992+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:23.567116+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:24.567232+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:25.567378+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:26.568028+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:27.568174+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:28.568352+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:29.568469+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:30.568616+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:31.568796+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:32.568925+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:33.569044+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:34.569216+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:35.569339+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:36.569476+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:37.569608+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:38.569865+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:39.570321+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:40.570567+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:41.570724+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:42.570844+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:43.571088+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:44.571325+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:45.571546+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:46.571819+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:47.572044+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:48.573131+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:49.573364+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:50.573909+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:51.574098+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:52.574338+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:53.574555+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:54.574726+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:55.574926+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:56.575216+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:57.575387+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:58.575567+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:59.575731+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:00.575954+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:01.576137+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:02.576377+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:03.576544+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:04.576701+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:05.576848+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:06.577006+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:07.577216+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:08.577440+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:09.577559+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:10.577742+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:11.577901+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:12.578039+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:13.578210+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:14.578331+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:15.578472+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:16.578693+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:17.578846+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:18.578970+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:19.579110+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:20.579270+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:21.579416+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:22.579670+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:23.579786+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:24.579872+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:25.579995+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:26.580144+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:27.580320+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:28.580444+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:29.580570+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:30.580725+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:31.580879+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:32.581055+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:33.581221+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:34.581367+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:35.581519+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:36.581620+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:37.581731+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:38.581837+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:39.581992+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:40.582138+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:41.582245+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:42.582390+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:43.582507+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:44.582704+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:45.582830+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:46.582963+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:47.583085+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:48.583275+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:49.583406+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:50.583566+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:51.583671+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:52.583798+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:53.583919+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:54.584053+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:55.584166+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:56.584297+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:57.584425+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:58.584544+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:59.584681+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:00.585317+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:01.585511+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:02.585670+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:03.585810+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:04.585938+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:05.586070+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:06.586188+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:07.586307+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:08.586440+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:09.586566+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:10.586772+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:11.586911+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:12.587045+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:13.587217+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:14.587374+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:15.587495+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:16.587647+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:17.587763+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:18.587907+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:19.588036+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:20.588215+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:21.588355+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:22.588491+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:23.588620+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:24.588766+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:25.588906+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:26.589044+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:27.589173+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:28.589317+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:29.589458+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:30.589655+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:31.589761+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:32.590063+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:33.590269+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:34.590412+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:35.590534+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:36.590651+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:37.590796+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:38.590936+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:39.591062+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:40.591256+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:41.591494+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:42.591739+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:43.591913+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:44.592216+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:45.592411+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:46.592550+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:47.592698+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:48.592839+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:49.593017+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:50.593225+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:51.593362+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:52.593505+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:53.593639+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:54.593834+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:55.594036+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:56.594432+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:57.594812+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:58.595227+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:59.596011+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:00.596267+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:01.596435+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:02.596600+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:03.597421+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:04.597562+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:05.597687+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:06.597859+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:07.598017+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:08.598268+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:09.598459+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:10.598670+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:11.598874+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:12.599017+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:13.599141+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:14.599328+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:15.599473+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:16.599599+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:17.599719+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:18.599941+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:19.600140+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:20.600438+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:21.600587+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:22.600718+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:23.600929+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:24.601145+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:25.601441+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:26.601644+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:27.601834+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:28.602089+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:29.602270+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:30.602458+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:31.602958+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:32.603270+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:33.603428+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:34.603774+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:35.603994+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:36.604101+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:37.604320+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:38.604455+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:39.604642+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:40.604875+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:41.605003+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:42.605119+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:43.605248+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:44.605481+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:45.605684+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:46.605962+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:47.606102+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:48.606235+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:49.606414+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:50.606598+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:51.606711+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:52.606835+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:53.606977+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:54.607114+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:55.607250+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:56.607307+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:57.607444+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:58.607674+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:59.607823+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:00.607990+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:01.608145+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:02.608512+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: mgrc ms_handle_reset ms_handle_reset con 0x55b346270000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2943709997
Dec 01 20:55:55 compute-0 ceph-osd[88745]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2943709997,v1:192.168.122.100:6801/2943709997]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: get_auth_request con 0x55b346b03000 auth_method 0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: mgrc handle_mgr_configure stats_period=5
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:03.608654+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:04.608974+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:05.609118+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:06.609392+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:07.609709+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:08.609841+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:09.610058+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:10.610247+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:11.610393+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:12.610513+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:13.610680+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:14.610814+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:15.611129+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:16.611371+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:17.611700+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:18.612042+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:19.612265+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:20.612409+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:21.612615+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:22.612738+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:23.612901+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:24.803503+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:25.803817+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:26.803957+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:27.804174+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:28.804354+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:29.804477+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:30.804624+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:31.804932+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:32.805261+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:33.805520+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:34.805652+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:35.805985+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:36.806171+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:37.806331+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:38.806602+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:39.806757+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:40.806926+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:41.807064+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:42.807238+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:43.807370+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:44.807571+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:45.807883+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:46.808013+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:47.808373+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:48.808616+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:49.808830+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:50.809056+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:51.809249+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:52.809364+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:53.809631+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:54.809844+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:55.809956+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:56.810064+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:57.810189+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:58.810290+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:59.810419+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:00.810609+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:01.810734+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:02.810881+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:03.811042+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:04.811307+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:05.811483+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:06.811708+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:07.811898+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:08.812076+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:09.812230+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:10.812391+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:11.812640+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:12.812804+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:13.813032+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:14.813207+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:15.813339+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:16.813476+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:17.813638+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:18.813795+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:19.813922+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:20.814058+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:21.814279+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:22.814423+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:23.814578+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:24.814717+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:25.814847+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:26.814973+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:27.815093+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:28.815220+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:29.815338+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:30.815469+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:31.815620+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:32.815749+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:33.815955+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:34.816096+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:35.816267+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:36.816387+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:37.816504+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:38.816723+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:39.816904+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:40.817079+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:41.817243+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:42.817407+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:43.817538+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:44.817681+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:45.817805+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:46.817993+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:47.818146+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:48.818335+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:49.818529+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:50.818770+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:51.818976+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:52.819155+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:53.819344+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:54.819569+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:55.819733+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:56.819914+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:57.820093+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:58.820273+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:59.820433+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:00.820635+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:01.820900+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:02.821053+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:03.821220+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:04.821356+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:05.821666+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:06.821925+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:07.822134+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:08.822269+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:09.823413+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:10.824282+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:11.825022+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:12.825574+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:13.825955+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:14.826222+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:15.826352+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:16.826477+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:17.827519+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:18.827720+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:19.827920+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:20.828136+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:21.828565+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:22.828700+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:23.829257+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:24.829450+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:25.830042+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:26.830504+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:27.830674+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:28.830843+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:29.831127+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:30.834362+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:31.834583+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:32.834786+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:33.835397+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:34.835902+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:35.836257+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:36.836586+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:37.836828+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:38.837267+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:39.837669+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:40.838033+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:41.838254+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:42.838396+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:43.838546+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:44.838738+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:45.838879+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:46.839055+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:47.839163+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:48.839392+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:49.839516+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:50.839664+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:51.839856+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:52.839969+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:53.840104+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:54.840265+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:55.840368+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:56.840521+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:57.840669+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:58.840824+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:59.840961+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:00.841155+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:01.841294+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:02.841450+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:03.841603+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:04.841691+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:05.841862+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:06.842002+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:07.842147+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:08.842307+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:09.842445+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:10.842605+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:11.842785+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:12.842951+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:13.843089+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:14.843258+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:15.843447+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:16.843638+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:17.843864+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:18.844051+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:19.844201+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:20.844512+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:21.844681+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:22.844823+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:23.845005+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:24.845122+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:25.845241+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:26.845403+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:27.845539+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:28.845716+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:29.845885+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:30.846078+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:31.846261+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:32.846422+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:33.846552+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:34.846742+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:35.846910+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:36.847100+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:37.847286+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:38.847470+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:39.847647+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:40.847838+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:41.847995+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:42.848255+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:43.848439+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:44.848582+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:45.848744+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:46.848830+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:47.848967+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:48.849153+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:49.849306+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:50.849499+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:51.849622+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:52.849730+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:53.849873+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:54.850041+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:55.850272+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:56.850462+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:57.850633+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:58.850817+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:59.850988+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:00.851153+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:01.851247+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:02.851387+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:03.851538+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:04.851669+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:05.851788+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:06.851900+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:07.852045+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:08.852234+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:09.852413+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:10.852611+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:11.852775+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:12.852932+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:13.853106+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:14.853252+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:15.853389+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:16.853530+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:17.853661+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:18.853881+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:19.854021+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:20.854175+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:21.854347+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:22.854510+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:23.854642+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread fragmentation_score=0.000128 took=0.000016s
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:24.854789+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:25.855017+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:26.855161+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:27.855369+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:28.855594+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:29.855851+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:30.856019+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:31.856162+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:32.856318+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:33.856474+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:34.856637+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:35.856774+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:36.856895+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:37.857051+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:38.857172+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:39.857470+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:40.857727+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:41.857881+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:42.858027+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:43.858230+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:44.858431+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:45.858604+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:46.858766+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:47.858902+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:48.859055+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:49.859241+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:50.859392+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:51.859522+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:52.859697+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:53.859946+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:54.860095+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:55.860227+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4156 writes, 19K keys, 4156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4156 writes, 370 syncs, 11.23 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:56.860359+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:57.860536+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:58.860704+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:59.860823+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:00.860985+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:01.861127+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:02.861249+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:03.861500+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:04.861708+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:05.861827+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1000.782714844s of 1000.789855957s, submitted: 4
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:06.861953+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbec00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 17154048 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:07.862149+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 17154048 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:08.862318+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 16277504 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:09.862510+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd916000/0x0/0x4ffc00000, data 0x84b706/0x8b6000, compress 0x0/0x0/0x0, omap 0x7804, meta 0x1a287fc), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 67 ms_handle_reset con 0x55b346dbec00 session 0x55b346db08c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 16244736 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:10.862699+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546336 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd911000/0x0/0x4ffc00000, data 0x84ccef/0x8b9000, compress 0x0/0x0/0x0, omap 0x7ba2, meta 0x1a2845e), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 16244736 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483eb800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:11.862830+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 16089088 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd911000/0x0/0x4ffc00000, data 0x84ccef/0x8b9000, compress 0x0/0x0/0x0, omap 0x7c58, meta 0x1a283a8), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:12.862979+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 ms_handle_reset con 0x55b3483eb800 session 0x55b347b12540
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:13.863258+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:14.863939+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:15.864920+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:16.865120+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:17.865255+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:18.865434+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:19.865603+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:20.865835+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:21.865990+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:22.866172+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:23.866405+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:24.866594+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:26.275821+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:27.276038+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:28.276256+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:29.276490+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:30.276699+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:31.276940+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:32.277132+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:33.277336+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:34.277486+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:35.277694+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:36.277834+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:37.277976+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:38.278094+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:39.278441+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:40.278925+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:41.279085+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:42.279245+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:43.279400+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:44.279537+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:45.279672+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:46.279836+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483eb400
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.820827484s of 40.443630219s, submitted: 51
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 16023552 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 69 ms_handle_reset con 0x55b3483eb400 session 0x55b3481a01c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:47.280113+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 16023552 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:48.280332+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 16023552 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483ebc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:49.280523+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 24199168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:50.280664+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 24199168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 70 ms_handle_reset con 0x55b3483ebc00 session 0x55b345fbddc0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 70 ms_handle_reset con 0x55b3468da000 session 0x55b3481b8000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 604376 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fd104000/0x0/0x4ffc00000, data 0x1050f0a/0x10c6000, compress 0x0/0x0/0x0, omap 0x8976, meta 0x1a2768a), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:51.280911+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 24199168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 70 ms_handle_reset con 0x55b3468da000 session 0x55b345fbd880
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbec00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 70 ms_handle_reset con 0x55b346dbec00 session 0x55b346d6a1c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 71 ms_handle_reset con 0x55b3468da800 session 0x55b345f5ba40
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:52.281085+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dac00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 23912448 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 72 ms_handle_reset con 0x55b3468db000 session 0x55b3481cdc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 72 ms_handle_reset con 0x55b3468dac00 session 0x55b3445f4380
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:53.281275+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 72 ms_handle_reset con 0x55b3468da000 session 0x55b345fbc700
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 72 ms_handle_reset con 0x55b3468da800 session 0x55b345fbcfc0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 23969792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:54.281794+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 23969792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 73 ms_handle_reset con 0x55b3468db000 session 0x55b345fbc540
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:55.282124+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 23896064 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbec00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 572149 data_alloc: 218103808 data_used: 1251
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 74 ms_handle_reset con 0x55b346dbec00 session 0x55b347b128c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:56.282328+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db400
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.722087860s of 10.010193825s, submitted: 149
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 23535616 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 75 ms_handle_reset con 0x55b3468db400 session 0x55b3481e7a40
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fd8fb000/0x0/0x4ffc00000, data 0x856722/0x8ce000, compress 0x0/0x0/0x0, omap 0x99f7, meta 0x1a26609), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:57.282666+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 23568384 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:58.282931+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 23576576 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 75 ms_handle_reset con 0x55b3468da000 session 0x55b3481b8a80
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:59.283094+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 23543808 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 77 ms_handle_reset con 0x55b3468da800 session 0x55b346037c00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:00.283240+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 23732224 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 584510 data_alloc: 218103808 data_used: 5312
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fd8f1000/0x0/0x4ffc00000, data 0x85a925/0x8d7000, compress 0x0/0x0/0x0, omap 0xa69a, meta 0x1a25966), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:01.283567+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 23732224 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:02.283860+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 23691264 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:03.284045+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 23683072 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 78 ms_handle_reset con 0x55b3468db000 session 0x55b346da4540
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbec00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:04.284246+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 23683072 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fd8eb000/0x0/0x4ffc00000, data 0x85d3da/0x8dd000, compress 0x0/0x0/0x0, omap 0xabc3, meta 0x1a2543d), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 79 ms_handle_reset con 0x55b3468dd800 session 0x55b3481ccfc0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:05.284393+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 79 ms_handle_reset con 0x55b346dbec00 session 0x55b3481e61c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 80 ms_handle_reset con 0x55b3468da000 session 0x55b346cc7a40
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 22429696 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd400
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 597803 data_alloc: 218103808 data_used: 5897
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:06.284581+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 81 ms_handle_reset con 0x55b3468dd400 session 0x55b346db0700
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 81 heartbeat osd_stat(store_statfs(0x4fd8e1000/0x0/0x4ffc00000, data 0x86155c/0x8e7000, compress 0x0/0x0/0x0, omap 0xb3b6, meta 0x1a24c4a), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 22364160 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.186422348s of 10.314285278s, submitted: 71
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 81 heartbeat osd_stat(store_statfs(0x4fd8e1000/0x0/0x4ffc00000, data 0x86155c/0x8e7000, compress 0x0/0x0/0x0, omap 0xb3b6, meta 0x1a24c4a), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:07.284761+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 82 ms_handle_reset con 0x55b3468dd800 session 0x55b346cc6c40
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 22315008 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 82 ms_handle_reset con 0x55b3468db800 session 0x55b346cc6380
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:08.284964+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 22315008 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da400
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:09.285144+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 84 ms_handle_reset con 0x55b3468da400 session 0x55b3481a1a40
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 22102016 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:10.285291+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 22102016 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 620389 data_alloc: 218103808 data_used: 5897
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:11.285467+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 85 ms_handle_reset con 0x55b3468da000 session 0x55b347a8aa80
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 22102016 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:12.285615+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 22102016 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 85 heartbeat osd_stat(store_statfs(0x4fd8d1000/0x0/0x4ffc00000, data 0x868324/0x8f9000, compress 0x0/0x0/0x0, omap 0xc413, meta 0x1a23bed), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 85 handle_osd_map epochs [85,86], i have 86, src has [1,86]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:13.285782+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 21037056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:14.285992+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da400
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 21020672 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 86 heartbeat osd_stat(store_statfs(0x4fd8cc000/0x0/0x4ffc00000, data 0x8697d4/0x8fc000, compress 0x0/0x0/0x0, omap 0xc6bc, meta 0x1a23944), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:15.286124+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 87 ms_handle_reset con 0x55b3468da400 session 0x55b3481e6e00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 20963328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd400
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 628999 data_alloc: 218103808 data_used: 5897
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:16.286387+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 88 ms_handle_reset con 0x55b3468db800 session 0x55b3481cca80
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 88 ms_handle_reset con 0x55b3468dd400 session 0x55b346db1180
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 20701184 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dac00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.035199165s of 10.521741867s, submitted: 115
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:17.286554+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 89 ms_handle_reset con 0x55b3468dac00 session 0x55b3481cd880
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 20275200 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dac00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:18.286760+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 20193280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:19.286919+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 19988480 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 90 ms_handle_reset con 0x55b3468dac00 session 0x55b346d6a540
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da400
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:20.287169+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 19849216 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fd8a5000/0x0/0x4ffc00000, data 0x891dd1/0x927000, compress 0x0/0x0/0x0, omap 0xd692, meta 0x1a2296e), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 640844 data_alloc: 218103808 data_used: 16102
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 91 ms_handle_reset con 0x55b3468da400 session 0x55b345f5b500
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:21.287862+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 19816448 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 91 ms_handle_reset con 0x55b3468da000 session 0x55b3481fce00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd400
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:22.288052+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 19726336 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 92 ms_handle_reset con 0x55b3468db800 session 0x55b346da4700
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 92 ms_handle_reset con 0x55b3468dd400 session 0x55b3482116c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 92 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:23.288266+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fc6f9000/0x0/0x4ffc00000, data 0x894d0a/0x92f000, compress 0x0/0x0/0x0, omap 0xe0c5, meta 0x2bc1f3b), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 19644416 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:24.288420+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 93 ms_handle_reset con 0x55b3468da000 session 0x55b346cc61c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19603456 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fc6f9000/0x0/0x4ffc00000, data 0x894d0a/0x92f000, compress 0x0/0x0/0x0, omap 0xe290, meta 0x2bc1d70), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da400
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 93 ms_handle_reset con 0x55b3468da400 session 0x55b3481e6c40
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dac00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:25.288567+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 19611648 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 94 ms_handle_reset con 0x55b3468dac00 session 0x55b346d6a540
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 650615 data_alloc: 218103808 data_used: 20193
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:26.288757+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19587072 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.253656387s of 10.031966209s, submitted: 178
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:27.288888+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 95 ms_handle_reset con 0x55b3468db800 session 0x55b346d6afc0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 19546112 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 96 ms_handle_reset con 0x55b3468da800 session 0x55b3460361c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:28.289011+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 18399232 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:29.289331+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 18399232 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fc6e8000/0x0/0x4ffc00000, data 0x89a4c9/0x93c000, compress 0x0/0x0/0x0, omap 0xee55, meta 0x2bc11ab), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:30.289443+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 18399232 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fc6e8000/0x0/0x4ffc00000, data 0x89a4c9/0x93c000, compress 0x0/0x0/0x0, omap 0xee55, meta 0x2bc11ab), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664249 data_alloc: 218103808 data_used: 20163
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:31.289587+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468da800 session 0x55b347eaa700
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468da000 session 0x55b3481a1340
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468db800 session 0x55b3481e7c00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 18391040 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346aa8800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b346aa8800 session 0x55b346da4e00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483ebc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dc800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468dc800 session 0x55b346da5500
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3483ebc00 session 0x55b3481cd500
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468da000 session 0x55b345fbdc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468da800 session 0x55b346d6a700
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468db800 session 0x55b346d6a1c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dc800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468dc800 session 0x55b346da4a80
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:32.289695+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 18391040 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 97 handle_osd_map epochs [97,98], i have 98, src has [1,98]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:33.289838+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 18333696 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 98 ms_handle_reset con 0x55b3468da000 session 0x55b3481cce00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:34.289971+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x89b9dd/0x940000, compress 0x0/0x0/0x0, omap 0xf1de, meta 0x2bc0e22), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 18309120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:35.290100+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 18309120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667121 data_alloc: 218103808 data_used: 20217
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:36.290231+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 18309120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:37.290359+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 18309120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:38.290467+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483ebc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.124558449s of 11.240477562s, submitted: 51
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 99 ms_handle_reset con 0x55b3483ebc00 session 0x55b347b128c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346aa8800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 18292736 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 99 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x89b9dd/0x940000, compress 0x0/0x0/0x0, omap 0xf1de, meta 0x2bc0e22), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:39.290632+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 100 ms_handle_reset con 0x55b346aa8800 session 0x55b3481e76c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dcc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbfc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 100 ms_handle_reset con 0x55b346dbfc00 session 0x55b348210380
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 100 ms_handle_reset con 0x55b3468dcc00 session 0x55b346d6a8c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dcc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 17965056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 100 ms_handle_reset con 0x55b3468dcc00 session 0x55b347a8b340
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 100 heartbeat osd_stat(store_statfs(0x4fc6e2000/0x0/0x4ffc00000, data 0x89e45a/0x946000, compress 0x0/0x0/0x0, omap 0xf794, meta 0x2bc086c), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:40.290754+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 100 ms_handle_reset con 0x55b3468da000 session 0x55b3481a0a80
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346aa8800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 17965056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 678178 data_alloc: 218103808 data_used: 20233
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:41.290910+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 17874944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 102 ms_handle_reset con 0x55b346aa8800 session 0x55b3481cc1c0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:42.291048+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 17874944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbfc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 102 ms_handle_reset con 0x55b346dbfc00 session 0x55b346db1c00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483ebc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 102 ms_handle_reset con 0x55b3483ebc00 session 0x55b346036380
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:43.291173+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483ebc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 102 ms_handle_reset con 0x55b3483ebc00 session 0x55b346da4fc0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 17874944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:44.291357+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 102 ms_handle_reset con 0x55b3468da000 session 0x55b3481a0540
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 17874944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dcc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:45.291517+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 102 heartbeat osd_stat(store_statfs(0x4fc6df000/0x0/0x4ffc00000, data 0x8a1096/0x94c000, compress 0x0/0x0/0x0, omap 0xfd61, meta 0x2bc029f), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 17866752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 103 ms_handle_reset con 0x55b3468dcc00 session 0x55b348210e00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:46.291641+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 682643 data_alloc: 218103808 data_used: 20268
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 17842176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:47.291849+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 103 ms_handle_reset con 0x55b3468da800 session 0x55b347eaae00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 103 ms_handle_reset con 0x55b3468db800 session 0x55b3481fd340
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fc6da000/0x0/0x4ffc00000, data 0x8a2691/0x94e000, compress 0x0/0x0/0x0, omap 0x10093, meta 0x2bbff6d), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:48.292074+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 17932288 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc6da000/0x0/0x4ffc00000, data 0x8a3b4d/0x950000, compress 0x0/0x0/0x0, omap 0x103c5, meta 0x2bbfc3b), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:49.292262+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.364825249s of 10.770331383s, submitted: 84
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 105 ms_handle_reset con 0x55b3468da000 session 0x55b3481cddc0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 17915904 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:50.292414+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 17915904 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:51.293011+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 687103 data_alloc: 218103808 data_used: 24212
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 17915904 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:52.293172+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 17915904 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fc6d6000/0x0/0x4ffc00000, data 0x8a514b/0x952000, compress 0x0/0x0/0x0, omap 0x1066f, meta 0x2bbf991), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:53.293354+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 17915904 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:54.293527+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 106 ms_handle_reset con 0x55b3468dd800 session 0x55b345f5bc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 106 ms_handle_reset con 0x55b3468db000 session 0x55b3481e6000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 17907712 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 106 ms_handle_reset con 0x55b3468db800 session 0x55b346d6ac40
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:55.293688+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 17891328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dcc00
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 106 ms_handle_reset con 0x55b3468dcc00 session 0x55b345efc540
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:56.293832+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 686344 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 17989632 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 107 heartbeat osd_stat(store_statfs(0x4fc6fc000/0x0/0x4ffc00000, data 0x8825f4/0x930000, compress 0x0/0x0/0x0, omap 0x10a89, meta 0x2bbf577), peers [0,1] op hist [1])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 107 ms_handle_reset con 0x55b3468da000 session 0x55b347eab500
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:57.293975+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:58.294108+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:59.294255+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:00.294436+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:01.294613+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 692853 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:02.294849+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fc6f2000/0x0/0x4ffc00000, data 0x8850f1/0x936000, compress 0x0/0x0/0x0, omap 0x11068, meta 0x2bbef98), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:03.294986+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 108 handle_osd_map epochs [108,109], i have 109, src has [1,109]
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.772667885s of 14.188973427s, submitted: 90
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 18014208 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:04.295123+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 18014208 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:05.295261+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 18014208 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:06.295405+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:07.295521+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:08.295691+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:09.295795+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:10.295894+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:11.296046+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14748 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:12.296212+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:13.296574+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:14.296696+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:15.296818+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:16.296926+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:17.297074+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:18.297267+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:19.297439+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:20.297621+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:21.297836+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:22.297979+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:23.298112+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:24.298289+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:25.298452+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:26.298600+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:27.298735+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:28.298868+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:29.299023+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:30.299166+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:31.299411+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:32.299668+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:33.299793+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:34.299980+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:35.300097+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:36.300252+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:37.300451+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:38.300636+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:39.300857+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:40.300998+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:41.301286+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:42.301521+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:43.301655+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:44.301802+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:45.301986+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:46.302140+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:47.302238+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:48.302541+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:49.302684+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:50.302897+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:51.303060+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:52.303260+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:53.303419+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:54.303610+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:55.303819+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:56.304081+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:57.304246+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:58.304418+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:59.304612+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:00.304815+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:01.305081+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:02.305260+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:03.305442+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:04.305632+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:05.305835+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:06.306074+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:07.306257+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:08.306445+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:09.306623+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:10.306897+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:11.307261+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:12.307479+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:13.307632+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:14.307796+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:15.307959+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:16.308134+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:17.308226+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:18.309541+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:19.309699+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:20.309867+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 17932288 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:21.310067+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:55:55 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:55:55 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 20:55:55 compute-0 ceph-osd[88745]: do_command 'config diff' '{prefix=config diff}'
Dec 01 20:55:55 compute-0 ceph-osd[88745]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 01 20:55:55 compute-0 ceph-osd[88745]: do_command 'config show' '{prefix=config show}'
Dec 01 20:55:55 compute-0 ceph-osd[88745]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 01 20:55:55 compute-0 ceph-osd[88745]: do_command 'counter dump' '{prefix=counter dump}'
Dec 01 20:55:55 compute-0 ceph-osd[88745]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 17760256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: do_command 'counter schema' '{prefix=counter schema}'
Dec 01 20:55:55 compute-0 ceph-osd[88745]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:22.310252+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 17563648 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:23.310444+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 17498112 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:24.310595+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 17498112 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 20:55:55 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 20:55:55 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:25.310801+0000)
Dec 01 20:55:55 compute-0 ceph-osd[88745]: do_command 'log dump' '{prefix=log dump}'
Dec 01 20:55:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 01 20:55:56 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/975546091' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 01 20:55:56 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14752 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:56 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 01 20:55:56 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412988698' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 01 20:55:56 compute-0 ceph-mon[75880]: from='client.14746 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:56 compute-0 ceph-mon[75880]: pgmap v831: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:56 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/76264320' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 01 20:55:56 compute-0 ceph-mon[75880]: from='client.14748 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:56 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/975546091' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 01 20:55:56 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1412988698' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 01 20:55:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v832: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:56 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:57 compute-0 podman[252416]: 2025-12-01 20:55:57.022059282 +0000 UTC m=+0.113197928 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 01 20:55:57 compute-0 crontab[252494]: (root) LIST (root)
Dec 01 20:55:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 01 20:55:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008022919' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 01 20:55:57 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14760 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:57 compute-0 ceph-mon[75880]: from='client.14752 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:57 compute-0 ceph-mon[75880]: pgmap v832: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:57 compute-0 ceph-mon[75880]: from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:57 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/4008022919' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 01 20:55:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 01 20:55:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1415510446' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 01 20:55:57 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14764 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:57 compute-0 nova_compute[244568]: 2025-12-01 20:55:57.956 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:55:58 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14768 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:55:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 01 20:55:58 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/367706567' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 01 20:55:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:55:58 compute-0 ceph-mon[75880]: from='client.14760 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:58 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1415510446' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 01 20:55:58 compute-0 ceph-mon[75880]: from='client.14764 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:55:58 compute-0 ceph-mon[75880]: from='client.14768 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:55:58 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/367706567' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 01 20:55:58 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14770 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:55:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v833: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:58 compute-0 nova_compute[244568]: 2025-12-01 20:55:58.958 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:55:58 compute-0 nova_compute[244568]: 2025-12-01 20:55:58.958 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:55:58 compute-0 nova_compute[244568]: 2025-12-01 20:55:58.958 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 20:55:59 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 01 20:55:59 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1137714309' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 01 20:55:59 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14774 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:55:59 compute-0 ceph-mon[75880]: from='client.14770 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:55:59 compute-0 ceph-mon[75880]: pgmap v833: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:55:59 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1137714309' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 01 20:55:59 compute-0 ceph-mon[75880]: from='client.14774 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:55:59 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14778 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:55:59 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: 2025-12-01T20:55:59.901+0000 7f311224f640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 01 20:55:59 compute-0 ceph-mgr[76174]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 01 20:55:59 compute-0 nova_compute[244568]: 2025-12-01 20:55:59.958 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:56:00 compute-0 podman[252822]: 2025-12-01 20:56:00.089947695 +0000 UTC m=+0.051135774 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 01 20:56:00 compute-0 podman[252824]: 2025-12-01 20:56:00.149649174 +0000 UTC m=+0.110849203 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 01 20:56:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 01 20:56:00 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1392318775' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 01 20:56:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 01 20:56:00 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2235455189' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 01 20:56:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 01 20:56:00 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1915866832' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006168 3 0.000148
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006034 3 0.000175
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005963 3 0.001724
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005815 3 0.000432
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/25 les/c/f=41/26/0 sis=40) [1] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:12.319063+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 9 sent 7 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:33:41.484265+0000 osd.1 (osd.1) 8 : cluster [DBG] 3.9 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:33:41.494775+0000 osd.1 (osd.1) 9 : cluster [DBG] 3.9 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 60129280 unmapped: 2777088 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:13.319455+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 41 heartbeat osd_stat(store_statfs(0x4fe0ed000/0x0/0x4ffc00000, data 0x9946f/0xd9000, compress 0x0/0x0/0x0, omap 0x5019, meta 0x1a2afe7), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 9)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:33:41.484265+0000 osd.1 (osd.1) 8 : cluster [DBG] 3.9 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:33:41.494775+0000 osd.1 (osd.1) 9 : cluster [DBG] 3.9 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 2719744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:14.319588+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:33:43.480102+0000 osd.1 (osd.1) 10 : cluster [DBG] 3.a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:33:43.490670+0000 osd.1 (osd.1) 11 : cluster [DBG] 3.a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 11)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:33:43.480102+0000 osd.1 (osd.1) 10 : cluster [DBG] 3.a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:33:43.490670+0000 osd.1 (osd.1) 11 : cluster [DBG] 3.a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 2719744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:15.319739+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 2719744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:16.319864+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 336398 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 60219392 unmapped: 2686976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:17.320045+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 60219392 unmapped: 2686976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:18.320220+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 41 heartbeat osd_stat(store_statfs(0x4fe0f3000/0x0/0x4ffc00000, data 0x9946f/0xd9000, compress 0x0/0x0/0x0, omap 0x5019, meta 0x1a2afe7), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 60227584 unmapped: 2678784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:19.320303+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 41 handle_osd_map epochs [42,42], i have 41, src has [1,42]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.322652817s of 15.255033493s, submitted: 209
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000095 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000052
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000089 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000020
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000100 1 0.000044
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000128 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000033
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000168 1 0.000047
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000029
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000013
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000076 1 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000012
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000026
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000013
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000076 1 0.000039
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000068 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000023
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000012
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000073 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000016
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000170 1 0.000041
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000034
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000021 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000044
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000074 1 0.000023
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000018
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000030
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000066 1 0.000030
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000204 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000028 1 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000195 1 0.000052
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000067 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000047
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000055 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000015
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000033
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000032
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.268964 1 0.000045
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.272701 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.272815 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.419612 11 0.000104
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.459739 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.272902 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.459896 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.459962 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730834007s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.009506226s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.580264091s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.858955383s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730809212s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009506226s@ mbc={}] exit Reset 0.000051 1 0.000102
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730809212s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009506226s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730809212s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009506226s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730809212s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009506226s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.580239296s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858955383s@ mbc={}] exit Reset 0.000050 1 0.000077
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.580239296s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858955383s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730809212s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009506226s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.580239296s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858955383s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730809212s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009506226s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.580239296s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858955383s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.580239296s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858955383s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.580239296s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858955383s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.420006 11 0.000083
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.460180 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.460264 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.460316 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579126358s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.858009338s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579100609s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] exit Reset 0.000076 1 0.000100
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579100609s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579100609s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579100609s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579100609s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579100609s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.420088 11 0.000065
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.460355 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.460471 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.460508 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579756737s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.858795166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579740524s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858795166s@ mbc={}] exit Reset 0.000033 1 0.000054
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579740524s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858795166s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.419653 11 0.000117
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579740524s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858795166s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579740524s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858795166s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.460247 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579740524s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858795166s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.460338 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579740524s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858795166s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.460365 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579789162s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.858909607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.266792 1 0.000044
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.273032 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273082 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273104 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579769135s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858909607s@ mbc={}] exit Reset 0.000064 1 0.000539
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579769135s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858909607s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579769135s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858909607s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579769135s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858909607s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579769135s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858909607s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.579769135s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858909607s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732884407s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012062073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.420417 11 0.000126
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.460827 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.460963 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732856750s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012062073s@ mbc={}] exit Reset 0.000054 1 0.000083
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.420511 11 0.000101
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732856750s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012062073s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.460963 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732856750s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012062073s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.461018 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732856750s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012062073s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732856750s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012062073s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732856750s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012062073s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.266499 1 0.000032
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.461029 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.272958 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273028 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.461066 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273053 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578691483s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.857963562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732902527s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012191772s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578671455s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857963562s@ mbc={}] exit Reset 0.000041 1 0.000096
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578671455s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857963562s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578685760s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.857986450s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578671455s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857963562s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578671455s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857963562s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578668594s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857986450s@ mbc={}] exit Reset 0.000048 1 0.000094
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578671455s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857963562s@ mbc={}] exit Start 0.000012 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578668594s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857986450s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578671455s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857963562s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578668594s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857986450s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578668594s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857986450s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.269663 1 0.000072
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578668594s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857986450s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.273347 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578668594s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.857986450s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273424 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273447 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730150223s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.009513855s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.420780 11 0.000198
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.461181 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730113029s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009513855s@ mbc={}] exit Reset 0.000052 1 0.000072
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.461261 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730113029s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009513855s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730113029s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009513855s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.461287 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730113029s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009513855s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.428903 11 0.000068
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730113029s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009513855s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.461299 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730113029s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.009513855s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.461374 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578577042s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.858009338s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.461401 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578563690s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] exit Reset 0.000027 1 0.000074
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578563690s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578563690s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570871353s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850326538s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578563690s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578563690s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.578563690s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.858009338s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570859909s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] exit Reset 0.000036 1 0.000045
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.266578 1 0.000028
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570859909s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.272977 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570859909s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570859909s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273048 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570859909s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570859909s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.266547 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273086 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.272958 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273029 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273060 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732740402s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012268066s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732756615s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012290955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732726097s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012268066s@ mbc={}] exit Reset 0.000031 1 0.000060
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.428984 11 0.000064
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732745171s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012290955s@ mbc={}] exit Reset 0.000023 1 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.461423 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732745171s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012290955s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732726097s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012268066s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.461582 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732745171s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012290955s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732745171s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012290955s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732726097s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012268066s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732745171s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012290955s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.461617 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732726097s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012268066s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732745171s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012290955s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732726097s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012268066s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732726097s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012268066s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570790291s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850372314s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.266610 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.273019 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273101 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570778847s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850372314s@ mbc={}] exit Reset 0.000023 1 0.000045
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570778847s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850372314s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570778847s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850372314s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570778847s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850372314s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570778847s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850372314s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570778847s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850372314s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.266696 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.273083 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273147 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273176 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273173 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.266597 1 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.272963 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273042 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732734680s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012397766s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732617378s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012283325s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.266663 1 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273075 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.272999 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273043 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732597351s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012283325s@ mbc={}] exit Reset 0.000038 1 0.000063
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273061 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732597351s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012283325s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732597351s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012283325s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732597351s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012283325s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732597351s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012283325s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732597351s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012283325s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732756615s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012458801s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732712746s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012428284s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732659340s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012397766s@ mbc={}] exit Reset 0.000098 1 0.000159
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732659340s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012397766s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732659340s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012397766s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732659340s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012397766s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732659340s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012397766s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732659340s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012397766s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.429548 11 0.000068
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.462056 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.462114 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.462150 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570277214s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850196838s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570260048s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] exit Reset 0.000032 1 0.000056
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570260048s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570260048s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570260048s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570260048s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.570260048s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.266932 1 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.273205 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273257 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273277 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732433319s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012466431s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732419014s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012466431s@ mbc={}] exit Reset 0.000030 1 0.000056
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732419014s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012466431s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732419014s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012466431s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732419014s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012466431s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732419014s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012466431s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732419014s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012466431s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732224464s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012458801s@ mbc={}] exit Reset 0.000562 1 0.000581
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732224464s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012458801s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732224464s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012458801s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732224464s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012458801s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732166290s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012428284s@ mbc={}] exit Reset 0.000565 1 0.000584
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732224464s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012458801s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732224464s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012458801s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732166290s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012428284s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732166290s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012428284s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732166290s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012428284s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732166290s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012428284s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732166290s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012428284s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.429875 11 0.000145
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.462470 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.462568 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.462606 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.429915 11 0.000062
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569802284s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850204468s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.462405 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.462559 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.462614 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569784164s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850204468s@ mbc={}] exit Reset 0.000058 1 0.000084
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569784164s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850204468s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569784164s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850204468s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569784164s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850204468s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569784164s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850204468s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569784164s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850204468s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569887161s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850326538s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569872856s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] exit Reset 0.000031 1 0.000075
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569872856s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569872856s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569872856s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569872856s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569872856s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850326538s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.267423 1 0.000030
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.273664 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.430577 11 0.000104
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273710 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.462836 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273733 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.462902 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731929779s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012481689s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.462937 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731917381s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012481689s@ mbc={}] exit Reset 0.000025 1 0.000044
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731917381s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012481689s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731917381s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012481689s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731917381s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012481689s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731917381s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012481689s@ mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731917381s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012481689s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569235802s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849822998s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569217682s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849822998s@ mbc={}] exit Reset 0.000037 1 0.000072
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569217682s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849822998s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569217682s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849822998s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569217682s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849822998s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569217682s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849822998s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.267470 1 0.000028
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.273760 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273827 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273851 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731840134s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012496948s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569217682s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849822998s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732886314s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012191772s@ mbc={}] exit Reset 0.000033 1 0.000057
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.430402 11 0.000091
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.463008 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.463073 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.463124 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569419861s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850196838s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569405556s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] exit Reset 0.000033 1 0.000057
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569405556s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569405556s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569405556s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569405556s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.569405556s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850196838s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.267746 1 0.000032
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.273920 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.273969 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.273987 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731627464s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012512207s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731595993s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012496948s@ mbc={}] exit Reset 0.000261 1 0.000281
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731607437s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012512207s@ mbc={}] exit Reset 0.000035 1 0.000051
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731595993s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012496948s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.430935 11 0.000078
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731607437s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012512207s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731607437s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012512207s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.463256 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731595993s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012496948s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731607437s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012512207s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.463346 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731607437s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012512207s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731595993s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012496948s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731607437s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012512207s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731595993s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012496948s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731595993s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012496948s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.463397 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.431138 11 0.000115
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.463445 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568803787s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849807739s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.463505 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568786621s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849807739s@ mbc={}] exit Reset 0.000037 1 0.000111
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.463538 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568786621s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849807739s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568696976s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849746704s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568675995s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849746704s@ mbc={}] exit Reset 0.000041 1 0.000067
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568675995s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849746704s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568675995s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849746704s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568675995s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849746704s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568675995s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849746704s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568675995s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849746704s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.267750 1 0.000028
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.274130 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.274185 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.274235 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731457710s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012634277s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731442451s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012634277s@ mbc={}] exit Reset 0.000036 1 0.000220
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731442451s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012634277s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731442451s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012634277s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731442451s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012634277s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731442451s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012634277s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731442451s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012634277s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.431391 11 0.000142
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.463738 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.463805 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.463861 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568431854s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849739075s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568408012s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] exit Reset 0.000043 1 0.000068
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568408012s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.268142 1 0.000028
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568408012s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.274309 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568408012s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.274349 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568408012s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568408012s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.274370 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731233597s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012611389s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731206894s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012611389s@ mbc={}] exit Reset 0.000043 1 0.000068
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731206894s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012611389s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731206894s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012611389s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731206894s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012611389s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731206894s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012611389s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731206894s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012611389s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.268102 1 0.000029
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.274307 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.274391 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.274426 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.431139 11 0.000093
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.463830 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.463953 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.464055 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731218338s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012695312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568694115s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.850189209s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731201172s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012695312s@ mbc={}] exit Reset 0.000038 1 0.000060
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731201172s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012695312s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731201172s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012695312s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568683624s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850189209s@ mbc={}] exit Reset 0.000026 1 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731201172s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012695312s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568683624s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850189209s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731201172s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012695312s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568683624s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850189209s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568683624s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850189209s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731201172s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012695312s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568683624s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850189209s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568683624s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.850189209s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.268319 1 0.000029
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.274494 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.274539 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.274557 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.731010437s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012657166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730994225s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012657166s@ mbc={}] exit Reset 0.000033 1 0.000056
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730994225s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012657166s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730994225s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012657166s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730994225s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012657166s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730994225s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012657166s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730994225s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012657166s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.431369 11 0.000695
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.464376 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.464504 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.464545 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.268430 1 0.000026
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567954063s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849739075s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.274498 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.274621 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.274644 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567939758s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] exit Reset 0.000032 1 0.000065
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567939758s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567939758s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567939758s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567939758s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567939758s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849739075s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730963707s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012786865s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730948448s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012786865s@ mbc={}] exit Reset 0.000032 1 0.000050
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730948448s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012786865s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730948448s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012786865s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730948448s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012786865s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730948448s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012786865s@ mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730948448s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012786865s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.432705 11 0.000126
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.464535 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.464649 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.464676 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567220688s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849105835s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.268480 1 0.000031
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.274371 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.274719 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.274738 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730884552s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 97.012809753s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730873108s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012809753s@ mbc={}] exit Reset 0.000024 1 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730873108s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012809753s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730873108s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012809753s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730873108s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012809753s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730873108s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012809753s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.730873108s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012809753s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.432894 11 0.000075
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.464710 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.464794 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.464817 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567152977s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 100.849166870s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567141533s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849166870s@ mbc={}] exit Reset 0.000022 1 0.000036
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567141533s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849166870s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567141533s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849166870s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567141533s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849166870s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567141533s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849166870s@ mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567141533s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849166870s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008452 2 0.000038
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000023
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000037
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000030
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000252 1 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000007
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000019
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000022
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000021
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000028 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000021
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568786621s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849807739s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568786621s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849807739s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568786621s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849807739s@ mbc={}] exit Start 0.004072 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.568786621s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849807739s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000082 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000022
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000090 1 0.000037
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 42 handle_osd_map epochs [42,42], i have 42, src has [1,42]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.011507 2 0.000069
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567207336s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849105835s@ mbc={}] exit Reset 0.000057 1 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567207336s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849105835s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567207336s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849105835s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567207336s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849105835s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567207336s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849105835s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.567207336s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 100.849105835s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000073 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732886314s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012191772s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732886314s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012191772s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732886314s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012191772s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732886314s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012191772s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=8.732886314s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 97.012191772s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000094 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000034
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000086 1 0.000041
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000032
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000041
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000022
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000023
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000021
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000021
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006936 2 0.000028
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007097 2 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020413 2 0.000050
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.020314 2 0.000026
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.020189 2 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020123 2 0.000036
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019995 2 0.000039
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019878 2 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019907 2 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.019635 2 0.000039
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.019508 2 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019388 2 0.000038
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019265 2 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.019160 2 0.000023
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019051 2 0.000042
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018593 2 0.000042
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018402 2 0.000030
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018293 2 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018198 2 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014522 2 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014351 2 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014004 2 0.000032
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013846 2 0.000020
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013722 2 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013015 2 0.000019
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012830 2 0.000034
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013660 2 0.000016
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012685 2 0.000022
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012554 2 0.000016
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012550 2 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012121 2 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012431 2 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010756 2 0.000038
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010629 2 0.000032
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010311 2 0.000022
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009870 2 0.000021
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009670 2 0.000020
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009421 2 0.000018
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009345 2 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009964 2 0.000019
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 622592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:20.320467+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 42 handle_osd_map epochs [43,43], i have 42, src has [1,43]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003433 2 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990514 2 0.000059
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000022 2 0.005745
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012048 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.010938 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.011701 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=0 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=0 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000102 1 0.000044
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990931 2 0.000042
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.011233 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990871 2 0.000019
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011078 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990814 2 0.000020
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010920 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=0 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000136 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=0 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000028
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000115 1 0.000061
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991217 2 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011189 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.1( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991066 2 0.000030
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011048 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991038 2 0.000042
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.010881 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990976 2 0.000035
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.010572 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990932 2 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010408 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=0 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000070 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=0 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000022
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000105 1 0.000041
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991836 2 0.000032
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011213 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992935 2 0.000034
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013564 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 43 handle_osd_map epochs [42,43], i have 43, src has [1,43]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991816 2 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010974 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991744 2 0.000029
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010572 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=0 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000089 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=0 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991813 2 0.000048
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992003 2 0.000022
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010506 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.011535 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991932 2 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000103 1 0.000055
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999030 2 0.000039
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006039 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010365 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991931 2 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010223 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991238 2 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005671 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991340 2 0.000044
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005992 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991462 2 0.000016
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005252 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991694 2 0.000031
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005625 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991763 2 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006054 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991917 2 0.000015
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005642 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992010 2 0.000022
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005113 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991731 2 0.000016
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001691 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992189 2 0.000015
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004818 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992264 2 0.000018
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005172 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991997 2 0.000021
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004561 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992277 2 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004893 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992466 2 0.000016
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005253 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992160 2 0.000022
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003034 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992289 2 0.000030
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004512 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992203 2 0.000021
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002591 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992260 2 0.000021
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002970 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992270 2 0.000016
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002021 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000916 2 0.000058
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008120 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994434 2 0.000030
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004491 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994887 2 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004393 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994878 2 0.000019
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004293 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007127 2 0.000051
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.006576 2 0.000058
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.005436 2 0.000039
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004755 2 0.000213
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.007965 4 0.000201
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.008027 4 0.000157
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007593 4 0.000086
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007581 4 0.000047
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.007715 4 0.000186
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007166 4 0.000061
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.007095 4 0.000100
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.007093 4 0.000096
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007240 4 0.000299
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000332 1 0.000066
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008499 4 0.000090
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012119 7 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007426 4 0.000095
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006527 4 0.000072
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006506 4 0.000130
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006057 4 0.000061
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006415 4 0.000055
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.006003 4 0.000583
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006401 4 0.000108
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006013 4 0.000062
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000028 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005842 4 0.000078
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006009 4 0.000128
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005610 4 0.000087
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005812 4 0.000156
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005986 4 0.000215
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007158 4 0.000085
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007106 4 0.000196
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006938 4 0.000092
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006867 4 0.000113
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006708 4 0.000120
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006675 4 0.000065
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006761 4 0.000108
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006584 4 0.000075
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006523 4 0.000341
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006499 4 0.000039
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006566 4 0.000110
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006593 4 0.000187
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006511 4 0.000061
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006518 4 0.000044
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006536 4 0.000042
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014953 7 0.000034
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006605 4 0.000070
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017893 7 0.000051
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018277 7 0.000055
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017706 7 0.000053
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017617 7 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015969 7 0.000040
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015721 7 0.000050
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016152 7 0.000040
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016418 7 0.000101
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017380 7 0.000047
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016838 7 0.000095
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016676 7 0.000037
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017563 7 0.000038
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017528 7 0.000033
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017192 7 0.000035
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007355 4 0.000124
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007435 4 0.000087
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007506 4 0.000122
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000043 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019049 7 0.000047
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021279 7 0.000036
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019232 7 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020475 7 0.000061
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019263 7 0.000052
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025187 7 0.000037
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025473 7 0.000041
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025117 7 0.000052
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026644 7 0.000070
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025616 7 0.000041
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026934 7 0.000064
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026347 7 0.000049
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027261 7 0.000061
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020081 7 0.006830
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026684 7 0.000071
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021168 7 0.004122
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026238 7 0.000035
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026986 7 0.000037
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025131 7 0.000031
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024924 7 0.000032
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024600 7 0.000042
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027687 7 0.000059
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024554 7 0.000040
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020912 7 0.003662
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027271 7 0.000067
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.291602 2 0.000037
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.292071 2 0.000058
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.123888 1 0.000121
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.415990 2 0.000028
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 8192 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.148119 1 0.000107
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.564196 2 0.000031
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000012 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.141857 1 0.000108
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.706126 2 0.000026
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000012 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:21.320625+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.072760 1 0.000183
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.778536 2 0.000051
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.066677 1 0.000103
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000035 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845621 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.842982 1 0.000039
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843035 1 0.000022
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843069 1 0.000020
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843101 1 0.000016
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843182 1 0.000030
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843220 1 0.000015
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843266 1 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843320 1 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843351 1 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843405 1 0.000045
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843438 1 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843469 1 0.000018
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843512 1 0.000030
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843544 1 0.000032
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.843540 1 0.000319
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840065 1 0.000025
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840169 1 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840221 1 0.000011
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840269 1 0.000011
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840270 1 0.000045
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835105 1 0.000041
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835087 1 0.000031
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835042 1 0.000037
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835072 1 0.000101
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835102 1 0.000020
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835127 1 0.000136
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835162 1 0.000021
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835188 1 0.000019
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835204 1 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835251 1 0.000018
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835285 1 0.000199
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835262 1 0.000068
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835332 1 0.000222
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835314 1 0.000020
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835352 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835392 1 0.000106
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835413 1 0.000049
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835425 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835425 1 0.000065
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.835895 1 0.000809
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.009691 1 0.000060
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.852716 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.870651 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.17( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.015431 1 0.000026
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.17( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.858497 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.17( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.876806 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1f( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022736 1 0.000194
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1f( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.868416 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1f( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.880561 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030007 1 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.873105 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.890846 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.037301 1 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.880443 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.896436 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.13( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044543 1 0.000050
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.13( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.887755 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.13( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.905407 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051891 1 0.000034
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.895156 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.910915 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.3( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.059470 1 0.000066
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.3( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.902804 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.3( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.919015 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.6( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066683 1 0.000024
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.6( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.910042 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.6( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.926545 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073986 1 0.000014
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.917365 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.934067 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.f( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.081345 1 0.000016
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.f( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.924796 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.f( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.942240 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.6( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088740 1 0.000018
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.6( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.932225 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.6( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.949151 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096206 1 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.939711 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.957260 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.9( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.103383 1 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.9( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.946926 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.9( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.964645 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.110795 1 0.000016
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.954364 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.971583 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1b( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118131 1 0.000016
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1b( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.961988 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1b( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.976966 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.18( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.125469 1 0.000081
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.18( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.965566 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.18( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.984643 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.132759 1 0.000045
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.972957 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.994259 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.140339 1 0.000028
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.980654 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[3.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.999926 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1f( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.147470 1 0.000019
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1f( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.987801 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.1f( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.007125 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 43 handle_osd_map epochs [44,44], i have 43, src has [1,44]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.4( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.155180 1 0.000041
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.4( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.995545 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 43 pg[7.4( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.016079 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003386 2 0.000050
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008460 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003723 2 0.000054
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.009369 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003882 2 0.000045
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.010636 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004136 2 0.000059
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011440 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003954 3 0.000218
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003743 3 0.000103
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000083 1 0.000046
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 lc 33'19 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003951 3 0.000160
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004446 3 0.000231
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 44 handle_osd_map epochs [44,44], i have 44, src has [1,44]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.162313 4 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.997447 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.022666 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.1( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.169581 4 0.000040
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.1( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.004699 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.1( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.030196 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.176894 4 0.000019
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.011964 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.038647 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.184365 4 0.000018
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.019492 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.045226 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.11( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.191692 4 0.000019
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.11( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.026829 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.11( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.053795 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 384621 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.8( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.198973 4 0.000051
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.8( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.034153 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.8( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.060613 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.a( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.206318 4 0.000048
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.a( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.041526 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.a( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.068268 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.213693 4 0.000023
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.048915 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.074184 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.5( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.221053 4 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.5( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.056287 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.5( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.082548 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.228377 4 0.000022
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.063822 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.091121 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.235845 4 0.000050
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.071131 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.098142 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.e( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.243325 4 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.e( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.078627 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.e( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.103623 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.15( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.250534 4 0.000018
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.15( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.085906 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.15( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.111574 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.257856 4 0.000040
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.093203 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.117831 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.1c( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.265262 4 0.000021
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.1c( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.100670 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.1c( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.128413 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.c( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.272580 4 0.000020
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.c( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.108016 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.c( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.133234 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.1a( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.279932 4 0.000015
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.1a( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.115372 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.1a( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.139977 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.287428 4 0.000020
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.122906 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.150236 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.294688 4 0.000017
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.130185 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[3.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.151132 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.2( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.302007 4 0.000015
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.2( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.137939 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[7.2( empty lb MIN local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.163140 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.153380 3 0.000027
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.153362 3 0.000072
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000023 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.010406 1 0.000173
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000027 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=43/44 n=2 ec=39/23 lis/c=43/39 les/c/f=44/41/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 1957888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:22.320767+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 1941504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:23.320961+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 44 heartbeat osd_stat(store_statfs(0x4fe0df000/0x0/0x4ffc00000, data 0x9df19/0xe7000, compress 0x0/0x0/0x0, omap 0x75ef, meta 0x1a28a11), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 44 handle_osd_map epochs [45,45], i have 44, src has [1,45]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 2.188045 4 0.000163
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 3.039568 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 2.254974 4 0.000103
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 3.041174 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 4.051767 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 4.051804 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.965459824s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 active pruub 105.293518066s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.965383530s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293518066s@ mbc={}] exit Reset 0.000145 1 0.000214
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.965383530s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293518066s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 4.051260 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 2.470307 4 0.000080
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 3.042305 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 4.053564 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 4.053628 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964842796s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 active pruub 105.293350220s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 2.742882 4 0.000111
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.965383530s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293518066s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 3.042999 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.965383530s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293518066s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 4.054723 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 4.054763 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964750290s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293350220s@ mbc={}] exit Reset 0.000161 1 0.000270
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964750290s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293350220s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964750290s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293350220s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964750290s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293350220s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964750290s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293350220s@ mbc={}] exit Start 0.000061 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964635849s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 active pruub 105.293281555s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964750290s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293350220s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964510918s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293281555s@ mbc={}] exit Reset 0.000171 1 0.000226
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.965383530s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293518066s@ mbc={}] exit Start 0.000415 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964510918s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293281555s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.965383530s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293518066s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964510918s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293281555s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964510918s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293281555s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964510918s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293281555s@ mbc={}] exit Start 0.000071 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964510918s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.293281555s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 4.052274 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964970589s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 active pruub 105.294242859s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964728355s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.294242859s@ mbc={}] exit Reset 0.000374 1 0.001655
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964728355s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.294242859s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964728355s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.294242859s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964728355s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.294242859s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964728355s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.294242859s@ mbc={}] exit Start 0.000133 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 45 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=12.964728355s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY pruub 105.294242859s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 1900544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:24.321160+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:33:53.747948+0000 osd.1 (osd.1) 12 : cluster [DBG] 7.1e scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:33:53.758511+0000 osd.1 (osd.1) 13 : cluster [DBG] 7.1e scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 45 handle_osd_map epochs [45,46], i have 45, src has [1,46]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 13)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:33:53.747948+0000 osd.1 (osd.1) 12 : cluster [DBG] 7.1e scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:33:53.758511+0000 osd.1 (osd.1) 13 : cluster [DBG] 7.1e scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.014057 7 0.000782
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 46 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.013140 7 0.000380
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.014423 7 0.000242
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.014260 7 0.000214
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000117 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000028
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000113 1 0.000049
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000023
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000115 1 0.000048
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001362 2 0.000033
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetLog 0.000863 2 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.012067 2 0.000137
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.012114 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000071 1 0.000099
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.123812 2 0.000198
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.123936 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.149454 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.219730 2 0.000046
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.219765 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000065 1 0.000089
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.330943 2 0.000106
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.331002 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000103 1 0.000127
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.123331 2 0.000170
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.123448 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.357608 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.413463 2 0.000058
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.413562 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000104 1 0.000168
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.093590 2 0.000291
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.093784 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.439382 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.026097 2 0.000291
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.026488 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.454826 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:25.321496+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 46 handle_osd_map epochs [46,47], i have 46, src has [1,47]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 46 handle_osd_map epochs [46,47], i have 47, src has [1,47]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997717 2 0.000072
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering 0.998754 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown m=4 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999170 2 0.000063
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.000718 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.001775 3 0.000189
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000129 1 0.000067
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002413 3 0.000331
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.185998 3 0.000090
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.184259 3 0.000043
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000008 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000147 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.058968 1 0.000110
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 47 heartbeat osd_stat(store_statfs(0x4fe0db000/0x0/0x4ffc00000, data 0xa1fef/0xef000, compress 0x0/0x0/0x0, omap 0x8155, meta 0x1a27eab), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:26.321650+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 368387 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 47 handle_osd_map epochs [47,48], i have 47, src has [1,48]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 5.358413 13 0.000127
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 6.071763 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 7.082663 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 7.082702 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934832573s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 active pruub 105.293449402s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] exit Reset 0.000165 1 0.000248
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 5.648972 13 0.000123
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 6.073181 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 7.084147 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] exit Start 0.000015 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 7.084759 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933793068s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 active pruub 105.293289185s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] exit Reset 0.000254 1 0.000987
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] exit Start 0.000054 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 48 handle_osd_map epochs [48,48], i have 48, src has [1,48]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 48 heartbeat osd_stat(store_statfs(0x4fe0d5000/0x0/0x4ffc00000, data 0xa39b1/0xf3000, compress 0x0/0x0/0x0, omap 0x8556, meta 0x1a27aaa), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:27.321863+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:33:56.761858+0000 osd.1 (osd.1) 14 : cluster [DBG] 7.1d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:33:56.772403+0000 osd.1 (osd.1) 15 : cluster [DBG] 7.1d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 15)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:33:56.761858+0000 osd.1 (osd.1) 14 : cluster [DBG] 7.1d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:33:56.772403+0000 osd.1 (osd.1) 15 : cluster [DBG] 7.1d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.025774 7 0.000163
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 49 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.027151 7 0.000149
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.070994 2 0.000053
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.071043 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000136 1 0.000103
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.198140 2 0.000028
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.198184 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000150 1 0.000089
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.130997 2 0.000247
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.131232 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.228187 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.025814 2 0.000125
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.026039 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.251469 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:28.322140+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:29.322287+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.882609367s of 10.177284241s, submitted: 470
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:30.322427+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 50 heartbeat osd_stat(store_statfs(0x4fe0cd000/0x0/0x4ffc00000, data 0xa63a7/0xf7000, compress 0x0/0x0/0x0, omap 0x8af5, meta 0x1a2750b), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:31.322573+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 372826 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:32.322793+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:01.830326+0000 osd.1 (osd.1) 16 : cluster [DBG] 3.1a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:01.840795+0000 osd.1 (osd.1) 17 : cluster [DBG] 3.1a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 17)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:01.830326+0000 osd.1 (osd.1) 16 : cluster [DBG] 3.1a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:01.840795+0000 osd.1 (osd.1) 17 : cluster [DBG] 3.1a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:33.323045+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:34.323287+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:03.782069+0000 osd.1 (osd.1) 18 : cluster [DBG] 3.19 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:03.792520+0000 osd.1 (osd.1) 19 : cluster [DBG] 3.19 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 53 heartbeat osd_stat(store_statfs(0x4fe0cd000/0x0/0x4ffc00000, data 0xa8fd3/0xfd000, compress 0x0/0x0/0x0, omap 0x8fed, meta 0x1a27013), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 19)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:03.782069+0000 osd.1 (osd.1) 18 : cluster [DBG] 3.19 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:03.792520+0000 osd.1 (osd.1) 19 : cluster [DBG] 3.19 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:35.323523+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:36.323637+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:05.789505+0000 osd.1 (osd.1) 20 : cluster [DBG] 7.12 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:05.800088+0000 osd.1 (osd.1) 21 : cluster [DBG] 7.12 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 384425 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 21)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:05.789505+0000 osd.1 (osd.1) 20 : cluster [DBG] 7.12 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:05.800088+0000 osd.1 (osd.1) 21 : cluster [DBG] 7.12 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 638976 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:37.323891+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:06.836789+0000 osd.1 (osd.1) 22 : cluster [DBG] 3.14 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:06.847390+0000 osd.1 (osd.1) 23 : cluster [DBG] 3.14 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 23)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:06.836789+0000 osd.1 (osd.1) 22 : cluster [DBG] 3.14 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:06.847390+0000 osd.1 (osd.1) 23 : cluster [DBG] 3.14 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:38.324131+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 53 heartbeat osd_stat(store_statfs(0x4fe0cc000/0x0/0x4ffc00000, data 0xaa453/0x100000, compress 0x0/0x0/0x0, omap 0x9295, meta 0x1a26d6b), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:39.324299+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:08.770083+0000 osd.1 (osd.1) 24 : cluster [DBG] 3.13 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:08.780668+0000 osd.1 (osd.1) 25 : cluster [DBG] 3.13 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 18.737783 33 0.000147
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 18.744374 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 19.755618 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 19.755692 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262128830s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 active pruub 121.294456482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] exit Reset 0.000148 1 0.000239
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] exit Start 0.000014 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016062737s of 10.056778908s, submitted: 17
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 25)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:08.770083+0000 osd.1 (osd.1) 24 : cluster [DBG] 3.13 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:08.780668+0000 osd.1 (osd.1) 25 : cluster [DBG] 3.13 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.490505 7 0.000176
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000109 1 0.000048
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002053 1 0.000073
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002232 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.492809 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:40.324512+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:09.803702+0000 osd.1 (osd.1) 26 : cluster [DBG] 7.10 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:09.814261+0000 osd.1 (osd.1) 27 : cluster [DBG] 7.10 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 27)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:09.803702+0000 osd.1 (osd.1) 26 : cluster [DBG] 7.10 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:09.814261+0000 osd.1 (osd.1) 27 : cluster [DBG] 7.10 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 19.321074 37 0.000157
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 19.325632 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 20.334118 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 20.334164 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678371429s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 active pruub 122.300827026s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] exit Reset 0.000198 1 0.000270
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] exit Start 0.000014 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] enter Started/Stray
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 56 handle_osd_map epochs [56,56], i have 56, src has [1,56]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:41.324773+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:10.775481+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.17 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:10.786021+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.17 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 401703 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 29)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:10.775481+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.17 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:10.786021+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.17 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:42.325106+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.862351 6 0.000122
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001592 2 0.000153
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 DELETING pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.001996 1 0.000059
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.003655 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.866133 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:43.325272+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 57 heartbeat osd_stat(store_statfs(0x4fe0bc000/0x0/0x4ffc00000, data 0xaf97f/0x10c000, compress 0x0/0x0/0x0, omap 0x9c8a, meta 0x1a26376), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 475136 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:44.325469+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:13.732814+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.16 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:13.743384+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.16 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=0 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000145 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=0 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000059
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000023 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000725 1 0.000112
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000626 2 0.000147
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 31)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:13.732814+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.16 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:13.743384+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.16 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 58 handle_osd_map epochs [58,59], i have 59, src has [1,59]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.528370 2 0.000125
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.529873 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002653 4 0.000158
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000161 1 0.000075
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007832 2 0.000083
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 458752 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:45.325737+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 450560 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:46.325986+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 414414 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 450560 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:47.326166+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 59 heartbeat osd_stat(store_statfs(0x4fe0b6000/0x0/0x4ffc00000, data 0xb25b3/0x112000, compress 0x0/0x0/0x0, omap 0xa208, meta 0x1a25df8), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d(unlocked)] enter Initial
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=0 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000119 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=0 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000032 1 0.000055
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000152 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000206 1 0.000378
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000653 2 0.000089
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000028 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 434176 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:48.326360+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 60 handle_osd_map epochs [60,61], i have 61, src has [1,61]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.634047 2 0.000290
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.635071 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.003035 3 0.000182
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000099 1 0.000059
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000007 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 61 handle_osd_map epochs [61,61], i have 61, src has [1,61]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.117056 3 0.000063
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 1482752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:49.326532+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.887034416s of 10.012774467s, submitted: 34
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 1466368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:50.327370+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:19.816514+0000 osd.1 (osd.1) 32 : cluster [DBG] 3.10 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:19.827233+0000 osd.1 (osd.1) 33 : cluster [DBG] 3.10 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 33)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:19.816514+0000 osd.1 (osd.1) 32 : cluster [DBG] 3.10 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:19.827233+0000 osd.1 (osd.1) 33 : cluster [DBG] 3.10 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1449984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:51.328152+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:20.812529+0000 osd.1 (osd.1) 34 : cluster [DBG] 7.14 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:20.823358+0000 osd.1 (osd.1) 35 : cluster [DBG] 7.14 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ae000/0x0/0x4ffc00000, data 0xb66a5/0x11c000, compress 0x0/0x0/0x0, omap 0xab14, meta 0x1a254ec), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 434876 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 35)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:20.812529+0000 osd.1 (osd.1) 34 : cluster [DBG] 7.14 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:20.823358+0000 osd.1 (osd.1) 35 : cluster [DBG] 7.14 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 1433600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:52.328364+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 1409024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:53.328562+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:22.783748+0000 osd.1 (osd.1) 36 : cluster [DBG] 7.b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:22.794243+0000 osd.1 (osd.1) 37 : cluster [DBG] 7.b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 37)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:22.783748+0000 osd.1 (osd.1) 36 : cluster [DBG] 7.b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:22.794243+0000 osd.1 (osd.1) 37 : cluster [DBG] 7.b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 1392640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:54.329392+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0a8000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 1392640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:55.329642+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1384448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:56.329787+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 439595 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1384448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:57.329959+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0a8000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0a8000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1351680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:58.330081+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:27.839650+0000 osd.1 (osd.1) 38 : cluster [DBG] 3.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:27.850211+0000 osd.1 (osd.1) 39 : cluster [DBG] 3.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 39)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:27.839650+0000 osd.1 (osd.1) 38 : cluster [DBG] 3.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:27.850211+0000 osd.1 (osd.1) 39 : cluster [DBG] 3.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 1343488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:59.330371+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.989737511s of 10.015211105s, submitted: 10
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:00.330527+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:29.831633+0000 osd.1 (osd.1) 40 : cluster [DBG] 3.b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:29.842227+0000 osd.1 (osd.1) 41 : cluster [DBG] 3.b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 41)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:29.831633+0000 osd.1 (osd.1) 40 : cluster [DBG] 3.b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:29.842227+0000 osd.1 (osd.1) 41 : cluster [DBG] 3.b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:01.330787+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:30.791870+0000 osd.1 (osd.1) 42 : cluster [DBG] 3.2 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:30.802400+0000 osd.1 (osd.1) 43 : cluster [DBG] 3.2 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1327104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446108 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 43)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:30.791870+0000 osd.1 (osd.1) 42 : cluster [DBG] 3.2 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:30.802400+0000 osd.1 (osd.1) 43 : cluster [DBG] 3.2 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:02.330969+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1327104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:03.331146+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1327104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:04.331408+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:33.777787+0000 osd.1 (osd.1) 44 : cluster [DBG] 3.0 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:33.788263+0000 osd.1 (osd.1) 45 : cluster [DBG] 3.0 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 45)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:33.777787+0000 osd.1 (osd.1) 44 : cluster [DBG] 3.0 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:33.788263+0000 osd.1 (osd.1) 45 : cluster [DBG] 3.0 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:05.331876+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 1310720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:06.332118+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:35.781237+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.0 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:35.791800+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.0 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 1302528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 450930 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 47)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:35.781237+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.0 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:35.791800+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.0 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:07.332487+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 1302528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:08.332624+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:09.332861+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:10.333081+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:11.333279+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 450930 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:12.333471+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.065758705s of 13.081768036s, submitted: 8
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:13.333658+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:42.913497+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:42.924056+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 49)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:42.913497+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:42.924056+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:14.333907+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:15.334148+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:44.980888+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.7 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:44.991501+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.7 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 51)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:44.980888+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.7 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:44.991501+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.7 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:16.334409+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:45.980643+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:45.991433+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458163 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 53)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:45.980643+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:45.991433+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:17.334672+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:18.334845+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:19.334985+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:49.063615+0000 osd.1 (osd.1) 54 : cluster [DBG] 3.1c scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:49.074219+0000 osd.1 (osd.1) 55 : cluster [DBG] 3.1c scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 1236992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 55)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:49.063615+0000 osd.1 (osd.1) 54 : cluster [DBG] 3.1c scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:49.074219+0000 osd.1 (osd.1) 55 : cluster [DBG] 3.1c scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:20.335141+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:50.044952+0000 osd.1 (osd.1) 56 : cluster [DBG] 7.19 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:50.055487+0000 osd.1 (osd.1) 57 : cluster [DBG] 7.19 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 57)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:50.044952+0000 osd.1 (osd.1) 56 : cluster [DBG] 7.19 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:50.055487+0000 osd.1 (osd.1) 57 : cluster [DBG] 7.19 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:21.335358+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462989 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:22.335538+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.114359856s of 10.133749962s, submitted: 10
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:23.335659+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:53.047247+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.2 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:53.057785+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.2 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 59)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:53.047247+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.2 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:53.057785+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.2 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:24.335819+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1204224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:25.336005+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:55.117398+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:55.127981+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1196032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 61)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:55.117398+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:55.127981+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:26.336216+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:56.094484+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.7 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:56.105071+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.7 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1196032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 63)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:56.094484+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.7 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:56.105071+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.7 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470222 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:27.336432+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1187840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:28.336662+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:58.111166+0000 osd.1 (osd.1) 64 : cluster [DBG] 4.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:58.121771+0000 osd.1 (osd.1) 65 : cluster [DBG] 4.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 65)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:58.111166+0000 osd.1 (osd.1) 64 : cluster [DBG] 4.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:58.121771+0000 osd.1 (osd.1) 65 : cluster [DBG] 4.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:29.336942+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:59.133096+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.5 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:59.143639+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.5 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 67)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:59.133096+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.5 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:59.143639+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.5 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:30.337196+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:31.337413+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:01.160267+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.f scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:01.170783+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.f scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1138688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 69)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:01.160267+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.f scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:01.170783+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.f scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477455 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:32.337650+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:02.155401+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.14 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:02.165952+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.14 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1138688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 71)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:02.155401+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.14 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:02.165952+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.14 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:33.337848+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1138688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.985276222s of 11.152510643s, submitted: 14
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:34.338028+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:04.199781+0000 osd.1 (osd.1) 72 : cluster [DBG] 2.1b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:04.210375+0000 osd.1 (osd.1) 73 : cluster [DBG] 2.1b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 73)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:04.199781+0000 osd.1 (osd.1) 72 : cluster [DBG] 2.1b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:04.210375+0000 osd.1 (osd.1) 73 : cluster [DBG] 2.1b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:35.338297+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:36.338491+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:06.196954+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.9 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:06.207483+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.9 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 75)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:06.196954+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.9 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:06.207483+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.9 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 484692 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:37.338762+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1081344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:38.338910+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1081344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:39.339099+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1073152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:40.339255+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1073152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:41.339386+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 484692 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:42.339500+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:12.242382+0000 osd.1 (osd.1) 76 : cluster [DBG] 5.11 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:12.253008+0000 osd.1 (osd.1) 77 : cluster [DBG] 5.11 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 77)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:12.242382+0000 osd.1 (osd.1) 76 : cluster [DBG] 5.11 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:12.253008+0000 osd.1 (osd.1) 77 : cluster [DBG] 5.11 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:43.339698+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:44.339826+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:45.340008+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.024221420s of 12.033978462s, submitted: 6
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:46.340137+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:16.233800+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.10 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:16.244349+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.10 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 79)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:16.233800+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.10 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:16.244349+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.10 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 489518 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:47.340331+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:48.340489+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:18.234010+0000 osd.1 (osd.1) 80 : cluster [DBG] 5.13 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:18.244640+0000 osd.1 (osd.1) 81 : cluster [DBG] 5.13 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 81)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:18.234010+0000 osd.1 (osd.1) 80 : cluster [DBG] 5.13 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:18.244640+0000 osd.1 (osd.1) 81 : cluster [DBG] 5.13 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:49.340690+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:50.341062+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:20.199085+0000 osd.1 (osd.1) 82 : cluster [DBG] 2.17 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:20.209558+0000 osd.1 (osd.1) 83 : cluster [DBG] 2.17 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 83)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:20.199085+0000 osd.1 (osd.1) 82 : cluster [DBG] 2.17 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:20.209558+0000 osd.1 (osd.1) 83 : cluster [DBG] 2.17 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:51.341394+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 494344 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:52.341556+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:53.341721+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:54.341871+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:55.342034+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:56.342206+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:26.025480+0000 osd.1 (osd.1) 84 : cluster [DBG] 4.12 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:26.036079+0000 osd.1 (osd.1) 85 : cluster [DBG] 4.12 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 85)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:26.025480+0000 osd.1 (osd.1) 84 : cluster [DBG] 4.12 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:26.036079+0000 osd.1 (osd.1) 85 : cluster [DBG] 4.12 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 496757 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:57.342361+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:58.342522+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:59.342684+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:00.342807+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:01.342956+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.750560760s of 15.766418457s, submitted: 8
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499170 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:02.343087+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:32.000090+0000 osd.1 (osd.1) 86 : cluster [DBG] 2.15 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:32.010669+0000 osd.1 (osd.1) 87 : cluster [DBG] 2.15 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 87)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:32.000090+0000 osd.1 (osd.1) 86 : cluster [DBG] 2.15 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:32.010669+0000 osd.1 (osd.1) 87 : cluster [DBG] 2.15 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:03.343246+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:04.343372+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:33.993219+0000 osd.1 (osd.1) 88 : cluster [DBG] 5.12 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:34.003815+0000 osd.1 (osd.1) 89 : cluster [DBG] 5.12 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:05.343583+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 89)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:33.993219+0000 osd.1 (osd.1) 88 : cluster [DBG] 5.12 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:34.003815+0000 osd.1 (osd.1) 89 : cluster [DBG] 5.12 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:06.343838+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501583 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:07.343998+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:08.344136+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:09.344264+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 868352 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:10.344398+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 868352 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:11.344534+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 860160 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501583 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:12.344654+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 860160 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.721431732s of 10.989896774s, submitted: 4
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:13.344796+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:42.990161+0000 osd.1 (osd.1) 90 : cluster [DBG] 5.16 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:43.000805+0000 osd.1 (osd.1) 91 : cluster [DBG] 5.16 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 91)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:42.990161+0000 osd.1 (osd.1) 90 : cluster [DBG] 5.16 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:43.000805+0000 osd.1 (osd.1) 91 : cluster [DBG] 5.16 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:14.345008+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:44.008516+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:44.019092+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 93)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:44.008516+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:44.019092+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:15.345199+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:16.345321+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 508818 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:17.345439+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:47.054098+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.9 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:47.064690+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.9 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:18.345622+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 4 last_log 97 sent 95 num 4 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:48.049907+0000 osd.1 (osd.1) 96 : cluster [DBG] 2.5 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:48.060480+0000 osd.1 (osd.1) 97 : cluster [DBG] 2.5 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 95)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:47.054098+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.9 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:47.064690+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.9 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:19.345783+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 4 last_log 99 sent 97 num 4 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:49.021778+0000 osd.1 (osd.1) 98 : cluster [DBG] 2.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:49.032359+0000 osd.1 (osd.1) 99 : cluster [DBG] 2.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 97)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:48.049907+0000 osd.1 (osd.1) 96 : cluster [DBG] 2.5 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:48.060480+0000 osd.1 (osd.1) 97 : cluster [DBG] 2.5 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 99)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:49.021778+0000 osd.1 (osd.1) 98 : cluster [DBG] 2.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:49.032359+0000 osd.1 (osd.1) 99 : cluster [DBG] 2.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 819200 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:20.345985+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:49.981562+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.3 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:49.992147+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.3 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 101)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:49.981562+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.3 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:49.992147+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.3 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 811008 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:21.346259+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 802816 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518462 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:22.346470+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:51.950169+0000 osd.1 (osd.1) 102 : cluster [DBG] 2.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:51.960692+0000 osd.1 (osd.1) 103 : cluster [DBG] 2.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 103)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:51.950169+0000 osd.1 (osd.1) 102 : cluster [DBG] 2.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:51.960692+0000 osd.1 (osd.1) 103 : cluster [DBG] 2.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:23.346669+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.557197571s of 10.995463371s, submitted: 14
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:24.346854+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:53.985713+0000 osd.1 (osd.1) 104 : cluster [DBG] 2.9 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:53.996311+0000 osd.1 (osd.1) 105 : cluster [DBG] 2.9 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 105)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:53.985713+0000 osd.1 (osd.1) 104 : cluster [DBG] 2.9 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:53.996311+0000 osd.1 (osd.1) 105 : cluster [DBG] 2.9 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:25.347079+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:54.981080+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.6 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:54.991677+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.6 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 107)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:54.981080+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.6 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:54.991677+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.6 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:26.347274+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 523284 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:27.347455+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:28.347603+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:58.045658+0000 osd.1 (osd.1) 108 : cluster [DBG] 2.7 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:58.056265+0000 osd.1 (osd.1) 109 : cluster [DBG] 2.7 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 729088 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 109)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:58.045658+0000 osd.1 (osd.1) 108 : cluster [DBG] 2.7 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:58.056265+0000 osd.1 (osd.1) 109 : cluster [DBG] 2.7 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:29.347827+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 729088 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:30.347995+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 729088 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:31.348226+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 525695 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:32.348363+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:33.348547+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:34.348761+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:35.349000+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:36.349143+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 525695 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:37.349406+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:38.349570+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:39.349732+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:40.349849+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:41.350046+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.051336288s of 18.072429657s, submitted: 6
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528106 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:42.350203+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:12.057937+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.c scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:12.068515+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.c scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 111)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:12.057937+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.c scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:12.068515+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.c scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:43.350378+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:44.350582+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:45.350892+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:46.351050+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:16.083647+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.1 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:16.094211+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.1 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 113)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:16.083647+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.1 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:16.094211+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.1 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 530517 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:47.351317+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:48.351443+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:49.351628+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:19.129637+0000 osd.1 (osd.1) 114 : cluster [DBG] 5.1d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:19.140269+0000 osd.1 (osd.1) 115 : cluster [DBG] 5.1d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 115)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:19.129637+0000 osd.1 (osd.1) 114 : cluster [DBG] 5.1d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:19.140269+0000 osd.1 (osd.1) 115 : cluster [DBG] 5.1d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:50.351837+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:51.352031+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:21.086997+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.f scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:21.097617+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.f scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 117)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:21.086997+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.f scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:21.097617+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.f scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.031682968s of 10.049464226s, submitted: 8
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 537754 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:52.352249+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:22.107643+0000 osd.1 (osd.1) 118 : cluster [DBG] 5.1a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:22.118253+0000 osd.1 (osd.1) 119 : cluster [DBG] 5.1a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:53.352454+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 119)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:22.107643+0000 osd.1 (osd.1) 118 : cluster [DBG] 5.1a scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:22.118253+0000 osd.1 (osd.1) 119 : cluster [DBG] 5.1a scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:54.352590+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:55.352791+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:25.168247+0000 osd.1 (osd.1) 120 : cluster [DBG] 5.19 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:25.178706+0000 osd.1 (osd.1) 121 : cluster [DBG] 5.19 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 121)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:25.168247+0000 osd.1 (osd.1) 120 : cluster [DBG] 5.19 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:25.178706+0000 osd.1 (osd.1) 121 : cluster [DBG] 5.19 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:56.353026+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542580 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:57.353273+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:27.219385+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.18 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:27.230029+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.18 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 123)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:27.219385+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.18 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:27.230029+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.18 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:58.353487+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:59.353627+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:00.353860+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:30.192706+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:30.217352+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 125)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:30.192706+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.4 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:30.217352+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.4 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:01.354072+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 544991 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:02.354245+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:03.354364+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:04.354539+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.036999702s of 13.050308228s, submitted: 8
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:05.354693+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:35.157930+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:35.172056+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 127)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:35.157930+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.b scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:35.172056+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.b scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:06.354916+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:36.188643+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.e scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:36.202879+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.e scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 129)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:36.188643+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.e scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:36.202879+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.e scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 549813 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:07.355174+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:08.355376+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:38.198302+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.1 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:38.208968+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.1 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 131)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:38.198302+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.1 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:38.208968+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.1 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:09.355599+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:10.355914+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:11.356064+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 552224 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:12.356271+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:13.356411+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:14.356583+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.037598610s of 10.053073883s, submitted: 6
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:15.356861+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:45.211093+0000 osd.1 (osd.1) 132 : cluster [DBG] 6.6 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:45.225192+0000 osd.1 (osd.1) 133 : cluster [DBG] 6.6 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 133)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:45.211093+0000 osd.1 (osd.1) 132 : cluster [DBG] 6.6 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:45.225192+0000 osd.1 (osd.1) 133 : cluster [DBG] 6.6 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:16.357072+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557046 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:17.357244+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:47.131323+0000 osd.1 (osd.1) 134 : cluster [DBG] 6.2 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:47.141919+0000 osd.1 (osd.1) 135 : cluster [DBG] 6.2 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 135)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:47.131323+0000 osd.1 (osd.1) 134 : cluster [DBG] 6.2 scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:47.141919+0000 osd.1 (osd.1) 135 : cluster [DBG] 6.2 scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:18.357443+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:19.357647+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:20.357786+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:21.357948+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 559457 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:22.358097+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:52.221486+0000 osd.1 (osd.1) 136 : cluster [DBG] 6.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:52.239174+0000 osd.1 (osd.1) 137 : cluster [DBG] 6.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 137)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:52.221486+0000 osd.1 (osd.1) 136 : cluster [DBG] 6.d scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:52.239174+0000 osd.1 (osd.1) 137 : cluster [DBG] 6.d scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:23.358327+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:53.261470+0000 osd.1 (osd.1) 138 : cluster [DBG] 6.c scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:53.275627+0000 osd.1 (osd.1) 139 : cluster [DBG] 6.c scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 139)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:53.261470+0000 osd.1 (osd.1) 138 : cluster [DBG] 6.c scrub starts
Dec 01 20:56:00 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:53.275627+0000 osd.1 (osd.1) 139 : cluster [DBG] 6.c scrub ok
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:24.358529+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:25.358730+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:26.358900+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:27.359048+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:28.359162+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:29.359297+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:30.359494+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:31.359635+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:32.359830+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:33.359969+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:34.360096+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:35.360306+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:36.360464+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:37.360601+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:38.360754+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:39.360928+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:40.361041+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:41.361188+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:42.361326+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:43.361607+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:44.361935+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:45.362308+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:46.362473+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:47.362660+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:48.362807+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:49.362960+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:50.363093+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:51.363230+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:52.363377+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:53.363555+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:54.363705+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:55.363900+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:56.364071+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:57.364233+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:58.364354+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:59.364509+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:00.364668+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:01.364804+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:02.364931+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:03.365050+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:04.365160+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:05.365324+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:06.365443+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:07.365559+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:08.365706+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:09.365842+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:10.365986+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:11.366156+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:12.366339+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:13.366543+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:14.366654+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:15.366853+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:16.366988+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:17.367120+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:18.367249+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:19.367370+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:20.367503+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:21.367832+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:22.367981+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:23.368091+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:24.368314+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:25.368529+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:26.368645+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:27.368772+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:28.368926+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:29.369064+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:30.369257+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:31.369410+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:32.369523+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:33.369651+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:34.369799+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:35.370036+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:36.370245+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:37.370377+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:38.370531+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:39.370685+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:40.370839+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:41.371028+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:42.371217+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:43.371351+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:44.371514+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:45.371707+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:46.371881+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:47.372051+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:48.372202+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:49.372353+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:50.372509+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:51.372642+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:52.372764+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:53.372862+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:54.372970+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:55.373238+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:56.373371+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:57.373485+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:58.373604+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:59.373744+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:00.373896+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:01.374039+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:02.374246+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:03.374370+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:04.374549+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:05.374956+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:06.375112+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:07.375240+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:08.375352+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:09.375612+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:10.375760+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:11.375947+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:12.376088+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:13.376223+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:14.376363+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:15.376588+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:16.376756+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 1048576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:17.376903+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:18.377026+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:19.377149+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:20.377275+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-mon[75880]: from='client.14778 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:00 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1392318775' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 01 20:56:00 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2235455189' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 01 20:56:00 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1915866832' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:21.377413+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:22.377551+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:23.377699+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:24.377919+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:25.378217+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:26.378394+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:27.378573+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:28.378717+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:29.378826+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:30.378940+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:31.379076+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:32.379210+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:33.379382+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:34.379503+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:35.379795+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:36.379981+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:37.380106+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:38.380249+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:39.380388+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:40.380523+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:41.380642+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:42.380785+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:43.380906+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:44.381047+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:45.381263+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:46.381386+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:47.381503+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:48.381624+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:49.381743+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:50.381878+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:51.381967+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:52.382106+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:53.382256+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:54.382407+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:55.382560+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:56.382725+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:57.382909+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:58.383116+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:59.383266+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:00.383398+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:01.383625+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:02.383757+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:03.383893+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:04.384055+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:05.384286+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:06.384454+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:07.384681+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:08.384902+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:09.385101+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:10.385239+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:11.385360+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:12.385499+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:13.385651+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:14.385814+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:15.386011+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:16.386145+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:17.386236+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:18.386360+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:19.386491+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:20.386631+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:21.386768+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:22.386955+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:23.387058+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:24.387197+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:25.387357+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:26.387487+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:27.387618+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:28.387775+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:29.388010+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:30.388141+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:31.388273+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:32.389108+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:33.389261+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:34.389422+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:35.389646+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:36.389832+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:37.390137+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:38.390921+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:39.391449+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:40.391697+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:41.391860+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:42.392065+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:43.392266+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:44.392400+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:45.392581+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:46.392693+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:47.392811+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:48.392945+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:49.393114+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:50.393248+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:51.393431+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:52.393679+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:53.393853+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:54.394048+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:55.394252+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:56.394368+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:57.394504+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:58.394727+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:59.394861+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:00.395044+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:01.395171+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:02.395360+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:03.395497+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:04.395638+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:05.395787+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:06.395914+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:07.396051+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:08.396206+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:09.396347+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:10.396466+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:11.396616+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:12.396753+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:13.396965+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:14.397096+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:15.397327+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:16.397486+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:17.397619+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:18.397759+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:19.397877+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:20.398004+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:21.398233+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:22.398380+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:23.398517+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:24.398669+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:25.399585+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:26.399751+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:27.399924+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:28.400078+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:29.400258+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:30.400421+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:31.400564+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:32.400713+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:33.400857+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:34.401019+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:35.401897+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:36.402027+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:37.402272+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:38.402562+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:39.402739+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:40.402879+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:41.403400+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:42.403579+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:43.403750+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:44.403961+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:45.404153+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:46.404280+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:47.404412+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:48.404535+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:49.404670+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:50.404790+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:51.404914+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:52.405081+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:53.405230+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:54.405367+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:55.405591+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:56.405746+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:57.405906+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:58.406040+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:59.406249+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:00.406457+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:01.406624+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:02.406815+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:03.407005+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:04.407171+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:05.407462+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:06.407629+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:07.407811+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:08.408010+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:09.408225+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:10.408447+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:11.408614+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:12.408817+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:13.409007+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:14.409238+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:15.409462+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:16.409638+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:17.409831+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:18.410006+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:19.410146+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:20.410318+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:21.410464+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:22.410625+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:23.410827+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:24.411038+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:25.411229+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:26.411359+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:27.411504+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:28.411645+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:29.411830+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:30.412015+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:31.412220+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:32.412402+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:33.412552+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:34.412699+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:35.412932+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:36.413104+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:37.413269+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:38.414087+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:39.414330+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:40.414458+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:41.414592+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:42.414948+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:43.415169+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:44.415328+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:45.415567+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:46.415973+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:47.416159+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:48.416307+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:49.416431+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:50.416574+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 4512 writes, 20K keys, 4512 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4512 writes, 503 syncs, 8.97 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4512 writes, 20K keys, 4512 commit groups, 1.0 writes per commit group, ingest: 16.59 MB, 0.03 MB/s
                                           Interval WAL: 4512 writes, 503 syncs, 8.97 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:51.416973+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 270336 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:52.417225+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:53.417460+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:54.417691+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:55.418149+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:56.418507+0000)
Dec 01 20:56:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v834: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:57.418683+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:58.418914+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:59.419280+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:00.419596+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:01.419821+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:02.419985+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:03.420348+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:04.420551+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:05.420863+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:06.421268+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:07.421579+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:08.421838+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:09.422032+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:10.422511+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:11.422641+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:12.422801+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:13.422968+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:14.423107+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:15.423239+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:16.423515+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:17.423881+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:18.424571+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:19.425147+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:20.425406+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:21.425770+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:22.426375+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:23.426734+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:24.427024+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:25.427700+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:26.428022+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:27.428521+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:28.428664+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:29.428952+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:30.429444+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:31.429761+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:32.429947+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:33.430268+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:34.430645+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:35.430958+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:36.431243+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:37.431444+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:38.431680+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:39.431983+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:40.432253+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:41.432513+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:42.432732+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:43.432901+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:44.433044+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:45.433243+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:46.433396+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:47.433535+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:48.433656+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:49.433787+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:50.434000+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:51.434168+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:52.434304+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:53.434433+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:54.434572+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:55.434801+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:56.434955+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:57.435096+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:58.435280+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:59.435449+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:00.435592+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:01.435779+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:02.435981+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:03.436131+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:04.436268+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:05.436415+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:06.436607+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:07.436784+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:08.436949+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:09.437125+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:10.437250+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:11.437367+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:12.437546+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:13.437715+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:14.437841+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:15.438070+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:16.438222+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:17.438424+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:18.438565+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:19.438703+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:20.438840+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:21.439038+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:22.439211+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:23.439362+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:24.439515+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:25.439704+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:26.439868+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:27.440044+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:28.440242+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:29.440369+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:30.440542+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:31.440682+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:32.440799+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:33.440936+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:34.441099+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:35.441259+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:36.441429+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:37.441578+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:38.441953+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:39.442137+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:40.442349+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:41.442510+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:42.444333+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:43.444542+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:44.444671+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:45.445051+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:46.445270+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:47.445420+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:48.445550+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:49.445673+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:50.445818+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:51.445984+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:52.446162+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:53.446249+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:54.446364+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:55.446522+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:56.446662+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:57.446883+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:58.447104+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:59.447253+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:00.447370+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:01.447503+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:02.447657+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:03.447755+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:04.447952+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:05.448541+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:06.448781+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:07.448922+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:08.449090+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:09.449289+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:10.449495+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:11.449671+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:12.449834+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:13.449958+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:14.450086+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:15.450256+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:16.450379+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:17.450537+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:18.450678+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:19.450854+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:20.450991+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:21.451215+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:22.451375+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:23.451513+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:24.451640+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:25.451886+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:26.452036+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:27.452205+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:28.452324+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:29.452471+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:30.452609+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:31.452722+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:32.452854+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:33.453030+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:34.453141+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:35.453390+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:36.453493+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:37.453599+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:38.453717+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:39.453928+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:40.454064+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:41.454221+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:42.454396+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:43.454506+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:44.454640+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:45.454793+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:46.454925+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:47.455111+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:48.455242+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:49.455440+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:50.455634+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:51.455759+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:52.455889+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:53.456047+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:54.456235+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:55.456418+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:56.456545+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:57.456692+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:58.456910+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:59.457051+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:00.457211+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:01.457366+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:02.457494+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:03.457634+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:04.457757+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:05.457990+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:06.458218+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:07.458344+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:08.458482+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:09.458614+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:10.458773+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:11.458951+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:12.459118+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:13.459279+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:14.459437+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:15.459641+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:16.459788+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:17.460099+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:18.460241+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:19.460409+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:20.460527+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:21.460679+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:22.460801+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:23.461014+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:24.461155+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:25.461313+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:26.461477+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:27.461625+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:28.461756+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:29.461936+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:30.462078+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:31.462263+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:32.462455+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:33.462603+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:34.462771+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:35.462963+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:36.463136+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:37.463258+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:38.463415+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:39.463570+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:40.463732+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:41.464104+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:42.464259+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:43.464386+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:44.464509+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:45.464678+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:46.464820+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:47.464936+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:48.465337+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:49.465534+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:50.465649+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:51.465951+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:52.466115+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:53.466316+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:54.466453+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:55.466655+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:56.466844+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:57.467384+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:58.467804+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:59.468151+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:00.468474+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:01.468801+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:02.468986+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:03.469233+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:04.469392+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:05.469592+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:06.469718+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:07.469894+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:08.470059+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:09.470271+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:10.470393+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:11.470617+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:12.470826+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:13.471001+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:14.471245+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:15.471520+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:16.471739+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:17.471935+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:18.472147+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:19.472264+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:20.472432+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:21.472603+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:22.472841+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:23.472965+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:24.473149+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:25.473355+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:26.473527+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:27.473681+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:28.474300+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:29.474437+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:30.474699+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:31.474807+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:32.474924+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:33.475150+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:34.475351+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:35.475522+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:36.475629+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:37.475778+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:38.475944+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:39.476079+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:40.476278+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:41.476426+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:42.476538+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:43.476676+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:44.476837+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:45.477028+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:46.477254+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:47.477463+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:48.477696+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:49.477844+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:50.477968+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:51.478144+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:52.478235+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:53.478411+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:54.478576+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:55.478790+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:56.478923+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: mgrc ms_handle_reset ms_handle_reset con 0x563147b44000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2943709997
Dec 01 20:56:00 compute-0 ceph-osd[87692]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2943709997,v1:192.168.122.100:6801/2943709997]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: get_auth_request con 0x563148678400 auth_method 0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: mgrc handle_mgr_configure stats_period=5
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:57.479083+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 ms_handle_reset con 0x563146fb8400 session 0x56314785cc40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9c00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 ms_handle_reset con 0x563148147c00 session 0x5631480d88c0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:58.479313+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 385024 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:59.479479+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 385024 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:00.479609+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:01.479806+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:02.480019+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:03.480197+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:04.480392+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:05.480604+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:06.480762+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:07.480896+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:08.481006+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:09.481168+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:10.481366+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:11.481545+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:12.481715+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:13.481937+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:14.482124+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:15.482565+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:16.482729+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:17.482959+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:18.483258+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:19.483415+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:20.483723+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:21.483935+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:22.513903+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:23.514052+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:24.514394+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:25.515308+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:26.515535+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:27.515749+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:28.515952+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:29.516224+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:30.516422+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:31.516595+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:32.516830+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:33.517025+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:34.517232+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:35.517514+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:36.517778+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:37.517989+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:38.518159+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:39.518346+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:40.518515+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:41.518672+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:42.518838+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:43.519011+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:44.519208+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:45.519396+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:46.519525+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:47.519690+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:48.519850+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:49.520001+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:50.520135+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:51.520257+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:52.520440+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:53.520644+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:54.520786+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:55.521207+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:56.521384+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:57.521542+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:58.521701+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:59.521878+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:00.522112+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:01.522251+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:02.522369+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:03.522512+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 nova_compute[244568]: 2025-12-01 20:56:00.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:56:00 compute-0 nova_compute[244568]: 2025-12-01 20:56:00.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 20:56:00 compute-0 nova_compute[244568]: 2025-12-01 20:56:00.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:04.522652+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:05.522941+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:06.523231+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:07.523506+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:08.523671+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:09.523841+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:10.524063+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:11.524306+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:12.524481+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:13.524642+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:14.524823+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:15.525038+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:16.525347+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:17.525601+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:18.525756+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:19.525989+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:20.526266+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:21.526501+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:22.526741+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:23.526924+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:24.527079+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:25.527308+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:26.527510+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:27.527710+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:28.527893+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:29.528080+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:30.528235+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:31.528391+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:32.528599+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:33.528753+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:34.528925+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:35.529152+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:36.529340+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:37.529542+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:38.529744+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:39.529918+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:40.530150+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:41.530377+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:42.530582+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:43.530943+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:44.531258+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:45.531543+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:46.531731+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:47.531922+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:48.532069+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:49.532253+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:50.532456+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:51.532748+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:52.532945+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:53.533089+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:54.533277+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:55.533621+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:56.533817+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:57.534034+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:58.534272+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:59.534428+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:00.534676+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:01.534947+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:02.535248+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:03.535484+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:04.535709+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:05.536237+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:06.536513+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:07.536729+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:08.536912+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:09.537287+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:10.537535+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:11.539429+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:12.540244+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:13.540786+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:14.546807+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:15.551716+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:16.551945+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:17.552537+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:18.552718+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:19.553585+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:20.553786+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:21.554086+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:22.554441+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:23.554606+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:24.554898+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:25.555117+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:26.555405+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:27.555694+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:28.555848+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:29.556117+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:30.556373+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:31.556692+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:32.556934+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:33.557120+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:34.557280+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:35.557444+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:36.557555+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:37.557698+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:38.557842+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:39.558026+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:40.558209+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:41.558380+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:42.558494+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:43.558637+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:44.558810+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:45.558997+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:46.559126+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:47.559223+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:48.559440+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:49.559586+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:50.559708+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:51.562321+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:52.562445+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:53.562581+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:54.562703+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:55.562849+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:56.562994+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:57.563155+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:58.563291+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:59.563415+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:00.563520+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:01.563642+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:02.563782+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:03.563933+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:04.564094+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:05.564285+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:06.564436+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:07.564609+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:08.564747+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:09.564875+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:10.565046+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:11.565207+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:12.565381+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:13.565526+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:14.565688+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:15.565845+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:16.565953+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:17.566108+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:18.566298+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:19.566521+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:20.566776+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:21.566951+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:22.567098+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:23.567223+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:24.567398+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:25.567616+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:26.567758+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:27.567927+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:28.568046+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:29.568235+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:30.568397+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:31.568535+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:32.568695+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:33.568850+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:34.569001+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:35.569150+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:36.569280+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:37.569445+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:38.569598+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:39.569733+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:40.569880+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:41.570024+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:42.570144+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:43.570356+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:44.570660+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:45.571059+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:46.571256+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:47.571423+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:48.571610+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:49.571845+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:50.572020+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:51.572121+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:52.572302+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:53.572451+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:54.572617+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:55.572810+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:56.572932+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:57.573095+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:58.573215+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:59.573370+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:00.573500+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:01.573624+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:02.573744+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:03.573865+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:04.573983+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:05.574097+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:06.574223+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:07.574339+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:08.574469+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:09.574558+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:10.574682+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:11.575019+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:12.575142+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:13.575334+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:14.575488+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:15.575689+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:16.575805+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:17.575983+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:18.576113+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:19.576219+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:20.576463+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:21.576616+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:22.576797+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:23.577024+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread fragmentation_score=0.000121 took=0.000015s
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:24.577311+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:25.577573+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:26.577715+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:27.577948+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:28.578079+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:29.578205+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:30.578362+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:31.578530+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:32.578694+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:33.578834+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:34.578976+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:35.579227+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:36.579404+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:37.579505+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:38.579622+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:39.579770+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:40.579955+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:41.580096+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:42.580264+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:43.580437+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:44.580642+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:45.580881+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:46.581057+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 nova_compute[244568]: 2025-12-01 20:56:00.977 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 20:56:00 compute-0 nova_compute[244568]: 2025-12-01 20:56:00.978 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:47.581289+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:48.581457+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:49.581642+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:50.581846+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 4512 writes, 20K keys, 4512 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4512 writes, 503 syncs, 8.97 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:51.581999+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:52.582154+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:53.582378+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:54.582493+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:55.582724+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:56.582930+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:57.583091+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:58.583277+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:59.583376+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:00.583449+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:01.583599+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:02.583718+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:03.583840+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:04.584028+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:05.584293+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 950.738098145s of 950.755859375s, submitted: 8
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 237568 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:06.584475+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xba710/0x126000, compress 0x0/0x0/0x0, omap 0xb2ac, meta 0x1a24d54), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 16924672 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:07.584594+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 16924672 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:08.584707+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 16924672 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:09.584839+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 67 ms_handle_reset con 0x56314828a400 session 0x56314785d880
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 16916480 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:10.585003+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 616850 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 16834560 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fd899000/0x0/0x4ffc00000, data 0x8bd335/0x92d000, compress 0x0/0x0/0x0, omap 0xb7d0, meta 0x1a24830), peers [0,2] op hist [0,0,0,0,1])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:11.585166+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 25051136 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:12.585371+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 ms_handle_reset con 0x5631474f9800 session 0x5631499e2a80
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:13.585559+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09d000/0x0/0x4ffc00000, data 0x10bd368/0x112f000, compress 0x0/0x0/0x0, omap 0xb7d0, meta 0x1a24830), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:14.585705+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:15.585847+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:16.585968+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:17.586122+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:18.586293+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:19.586442+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:20.586610+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:21.586766+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:22.586944+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:23.587248+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:24.587445+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:25.587689+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:26.587863+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:27.588050+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:28.588238+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:29.588435+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:30.588595+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:31.588787+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:32.589005+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:33.589227+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:34.589428+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:35.589625+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:36.589794+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:37.589950+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:38.590108+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:39.590284+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:40.590456+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:41.590615+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:42.590779+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:43.590952+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:44.591135+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:45.591369+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b10800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.565486908s of 40.443881989s, submitted: 44
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:46.591509+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 24780800 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 69 ms_handle_reset con 0x563149b10800 session 0x563149a36e00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:47.591675+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 24772608 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:48.591799+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 24772608 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b2d000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fc896000/0x0/0x4ffc00000, data 0x18bff5d/0x1936000, compress 0x0/0x0/0x0, omap 0xc29c, meta 0x1a23d64), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:49.592016+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 32800768 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:50.592286+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 32587776 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 70 ms_handle_reset con 0x563149b2d000 session 0x563148cd6a80
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 70 ms_handle_reset con 0x563149af2800 session 0x56314941ca80
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969898 data_alloc: 218103808 data_used: 934
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:51.592459+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 70 heartbeat osd_stat(store_statfs(0x4f988e000/0x0/0x4ffc00000, data 0x48c195d/0x493c000, compress 0x0/0x0/0x0, omap 0xc67d, meta 0x1a23983), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 31776768 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 70 ms_handle_reset con 0x5631474f9800 session 0x56314751c700
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b10800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 70 ms_handle_reset con 0x563149b10800 session 0x5631485e2380
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148b5a000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 71 ms_handle_reset con 0x563148b5a000 session 0x563145e2efc0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149f17c00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1fc00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:52.592590+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 30236672 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 72 ms_handle_reset con 0x563149d1fc00 session 0x5631476a1180
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 72 ms_handle_reset con 0x563149f17c00 session 0x563148cd6fc0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 72 ms_handle_reset con 0x5631474f9800 session 0x563149550540
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148b5a000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 72 ms_handle_reset con 0x563148b5a000 session 0x563149a1da40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:53.592755+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 30343168 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1ec00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fd08a000/0x0/0x4ffc00000, data 0x10c459f/0x1142000, compress 0x0/0x0/0x0, omap 0xd785, meta 0x1a2287b), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:54.592900+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 30343168 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 73 ms_handle_reset con 0x563149d1ec00 session 0x5631476a1a40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:55.593067+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 30212096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698182 data_alloc: 218103808 data_used: 4995
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 74 ms_handle_reset con 0x563149d1f000 session 0x563148cd7340
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.431722641s of 10.013735771s, submitted: 196
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:56.593274+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 29966336 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 74 heartbeat osd_stat(store_statfs(0x4fd084000/0x0/0x4ffc00000, data 0x10c6d9b/0x1144000, compress 0x0/0x0/0x0, omap 0xdfa9, meta 0x1a22057), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 75 ms_handle_reset con 0x563149d1f400 session 0x563149a36e00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fd081000/0x0/0x4ffc00000, data 0x10c87a7/0x1149000, compress 0x0/0x0/0x0, omap 0xe240, meta 0x1a21dc0), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:57.593471+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 30097408 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:58.593640+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 30081024 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 76 ms_handle_reset con 0x5631474f9800 session 0x563149a36380
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:59.593824+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 30031872 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148b5a000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 77 ms_handle_reset con 0x563148b5a000 session 0x5631478cc1c0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:00.594073+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 29908992 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 713014 data_alloc: 218103808 data_used: 4995
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:01.594237+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 29908992 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:02.594468+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fd07a000/0x0/0x4ffc00000, data 0x10cb3c1/0x114f000, compress 0x0/0x0/0x0, omap 0xe6b7, meta 0x1a21949), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 29908992 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:03.594622+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1ec00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 78 ms_handle_reset con 0x563149d1ec00 session 0x56314941d500
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 29720576 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:04.594796+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1fc00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 29523968 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fd078000/0x0/0x4ffc00000, data 0x10cc88d/0x1152000, compress 0x0/0x0/0x0, omap 0xe9a1, meta 0x1a2165f), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 80 ms_handle_reset con 0x563149d1fc00 session 0x5631485e2fc0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 80 ms_handle_reset con 0x563149d1f800 session 0x563149a36fc0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 80 ms_handle_reset con 0x563149d1f000 session 0x563149a37500
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 80 ms_handle_reset con 0x563149d1f800 session 0x563149a36540
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:05.595056+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b10800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 27254784 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 732442 data_alloc: 218103808 data_used: 4995
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 81 ms_handle_reset con 0x563149b10800 session 0x563147963dc0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:06.595238+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149f17c00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 27181056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.240373611s of 10.403330803s, submitted: 106
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 82 ms_handle_reset con 0x563149f17c00 session 0x563148152000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:07.595430+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b2d000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 27164672 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fbeca000/0x0/0x4ffc00000, data 0x10d0c86/0x1160000, compress 0x0/0x0/0x0, omap 0xfb85, meta 0x2bc047b), peers [0,2] op hist [0,0,0,0,1])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 83 ms_handle_reset con 0x563149b2d000 session 0x56314785c8c0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:08.595577+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b10800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 26984448 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:09.595727+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 84 ms_handle_reset con 0x563149b10800 session 0x5631499e3dc0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 26951680 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:10.595917+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 26886144 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 758587 data_alloc: 218103808 data_used: 4995
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 85 ms_handle_reset con 0x563149d1f000 session 0x563149a09340
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:11.596133+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 26869760 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fbebb000/0x0/0x4ffc00000, data 0x10d6066/0x116f000, compress 0x0/0x0/0x0, omap 0x10872, meta 0x2bbf78e), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:12.596294+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 26869760 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:13.596428+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fbeb6000/0x0/0x4ffc00000, data 0x10d764f/0x1172000, compress 0x0/0x0/0x0, omap 0x10b7c, meta 0x2bbf484), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 85 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 26804224 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:14.596754+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 26730496 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 87 ms_handle_reset con 0x563149d1f800 session 0x563149a1ce00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:15.597025+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149f17c00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 25788416 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768219 data_alloc: 218103808 data_used: 5011
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 88 ms_handle_reset con 0x56314828a400 session 0x563148cd6e00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 88 ms_handle_reset con 0x563149f17c00 session 0x563148cd7880
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fbeb3000/0x0/0x4ffc00000, data 0x10d9c1a/0x1176000, compress 0x0/0x0/0x0, omap 0x11751, meta 0x2bbe8af), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:16.597202+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b10800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 25706496 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.924459457s of 10.438578606s, submitted: 180
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 89 ms_handle_reset con 0x56314828a400 session 0x5631499e2540
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:17.597401+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 25690112 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fbeac000/0x0/0x4ffc00000, data 0x10dbeab/0x1178000, compress 0x0/0x0/0x0, omap 0x11fe9, meta 0x2bbe017), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:18.597673+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 25681920 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:19.597863+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 90 ms_handle_reset con 0x563149d1f800 session 0x563149475c00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 25870336 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:20.597999+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 25567232 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 91 ms_handle_reset con 0x563149cf0400 session 0x5631481528c0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775706 data_alloc: 218103808 data_used: 17706
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:21.598329+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 91 ms_handle_reset con 0x563149af2800 session 0x56314751cc40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 25395200 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0c00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fbeaf000/0x0/0x4ffc00000, data 0x10dec89/0x117d000, compress 0x0/0x0/0x0, omap 0x129ba, meta 0x2bbd646), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:22.598515+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 25247744 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 92 ms_handle_reset con 0x563149cf0800 session 0x563149a368c0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 92 ms_handle_reset con 0x563149cf0c00 session 0x563148152c40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:23.598669+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 25206784 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:24.598796+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 93 ms_handle_reset con 0x56314828a400 session 0x563149a1d880
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 25206784 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 93 ms_handle_reset con 0x563149af2800 session 0x563148153c00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:25.598983+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 25182208 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 94 ms_handle_reset con 0x563149cf0400 session 0x5631485c56c0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782580 data_alloc: 218103808 data_used: 18638
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 94 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10e2db0/0x1184000, compress 0x0/0x0/0x0, omap 0x1365b, meta 0x2bbc9a5), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:26.599172+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 25182208 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.109507561s of 10.018264771s, submitted: 188
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 95 ms_handle_reset con 0x563149d1f800 session 0x5631476a16c0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:27.599395+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 25149440 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 96 ms_handle_reset con 0x56314828a400 session 0x563149a36c40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:28.599602+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 25141248 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:29.599735+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 25116672 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fbe9a000/0x0/0x4ffc00000, data 0x10e6eec/0x118c000, compress 0x0/0x0/0x0, omap 0x14125, meta 0x2bbbedb), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:30.599849+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 25133056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791338 data_alloc: 218103808 data_used: 19251
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149af2800 session 0x5631485c5a40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149cf0400 session 0x56314751c8c0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0c00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149cf0c00 session 0x563148cd7a40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:31.599991+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf1000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149cf1000 session 0x5631485c4540
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149af2800 session 0x5631478ccc40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x56314828a400 session 0x5631478cda40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 24969216 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149cf0400 session 0x5631499e2c40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0c00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149cf0c00 session 0x5631485c5dc0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x5631474f9800 session 0x563147987a40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x56314828a400 session 0x563149a376c0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:32.600113+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 24936448 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:33.600264+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 98 ms_handle_reset con 0x563149af2800 session 0x5631499e2380
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 24616960 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0c00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:34.600368+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 24707072 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:35.600500+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 24707072 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802682 data_alloc: 218103808 data_used: 21299
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 98 heartbeat osd_stat(store_statfs(0x4fbe75000/0x0/0x4ffc00000, data 0x110c455/0x11b5000, compress 0x0/0x0/0x0, omap 0x14912, meta 0x2bbb6ee), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:36.600641+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 24707072 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:37.600835+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 24707072 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.560202599s of 10.949222565s, submitted: 79
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148b5a000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 99 ms_handle_reset con 0x563148b5a000 session 0x563149a36a80
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631478e1800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:38.600949+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 24625152 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 100 ms_handle_reset con 0x5631478e1800 session 0x563148662a80
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148613800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b2d000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 100 ms_handle_reset con 0x563148613800 session 0x563147986380
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 100 ms_handle_reset con 0x563149b2d000 session 0x5631486628c0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:39.601061+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148613800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 100 ms_handle_reset con 0x563148613800 session 0x5631485c5880
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631478e1800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 24264704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fbe72000/0x0/0x4ffc00000, data 0x110d905/0x11b8000, compress 0x0/0x0/0x0, omap 0x14ba6, meta 0x2bbb45a), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 101 ms_handle_reset con 0x5631478e1800 session 0x5631499e3340
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:40.601246+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 24231936 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817791 data_alloc: 218103808 data_used: 25461
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:41.601416+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 102 ms_handle_reset con 0x56314828a400 session 0x563147987880
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 24223744 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:42.601588+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 24207360 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148b5a000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 102 ms_handle_reset con 0x563148b5a000 session 0x5631476a0700
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631478e1800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 102 ms_handle_reset con 0x5631478e1800 session 0x5631478cc000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 102 ms_handle_reset con 0x56314828a400 session 0x5631480d9500
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:43.601735+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 24190976 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148613800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:44.601841+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 102 ms_handle_reset con 0x563148613800 session 0x5631476a1500
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 24182784 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b2d000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:45.602021+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 102 heartbeat osd_stat(store_statfs(0x4fbe69000/0x0/0x4ffc00000, data 0x1111eec/0x11c3000, compress 0x0/0x0/0x0, omap 0x15f55, meta 0x2bba0ab), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 23126016 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 103 ms_handle_reset con 0x563149b2d000 session 0x563148cd6c40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819548 data_alloc: 218103808 data_used: 25461
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:46.602160+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23085056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 103 ms_handle_reset con 0x563149cf0400 session 0x5631479876c0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 103 ms_handle_reset con 0x563149cf0c00 session 0x56314941c540
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:47.602309+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23085056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631478e1800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.259161949s of 10.002922058s, submitted: 116
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:48.602437+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23052288 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x11145e5/0x11c7000, compress 0x0/0x0/0x0, omap 0x16900, meta 0x2bb9700), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 105 ms_handle_reset con 0x5631478e1800 session 0x563149550c40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:49.602568+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 22962176 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fbe83000/0x0/0x4ffc00000, data 0x10f1ba1/0x11a4000, compress 0x0/0x0/0x0, omap 0x1703d, meta 0x2bb8fc3), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:50.602685+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 22962176 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 822506 data_alloc: 218103808 data_used: 19251
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:51.602847+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 22953984 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:52.603035+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 22953984 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:53.603223+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 22953984 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 106 ms_handle_reset con 0x563149b10800 session 0x56314751c540
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 106 ms_handle_reset con 0x563149d1f000 session 0x563148663dc0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:54.603371+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 106 ms_handle_reset con 0x56314828a400 session 0x563149a36c40
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23085056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:55.603544+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631478e1800
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 106 ms_handle_reset con 0x5631478e1800 session 0x563149551c00
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23085056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f000
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823483 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0x10f306d/0x11a7000, compress 0x0/0x0/0x0, omap 0x175cd, meta 0x2bb8a33), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:56.603702+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23068672 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 107 ms_handle_reset con 0x563149d1f000 session 0x5631485c4540
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:57.603842+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23068672 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:58.604000+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.773540497s of 10.412032127s, submitted: 169
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:59.604162+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:00.604361+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 829180 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:01.604546+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10f5b6a/0x11ad000, compress 0x0/0x0/0x0, omap 0x17e67, meta 0x2bb8199), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:02.604749+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:03.604941+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:04.605113+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23052288 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:05.605267+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23052288 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:06.605391+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23052288 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:07.605493+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23052288 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:08.605640+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:09.605786+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:10.605932+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:00 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:00 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:11.606108+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:12.606286+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:13.606436+0000)
Dec 01 20:56:00 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:00 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:14.606566+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:15.606806+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:16.607007+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:17.607310+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:18.607459+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:19.607624+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:20.607780+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:21.607951+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:22.608098+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:23.608313+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:24.608530+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:25.608758+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:26.608961+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:27.609143+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:28.609257+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:29.609407+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:30.609514+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:31.609794+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:32.610071+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:33.610335+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:34.610502+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:35.610796+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:36.610997+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:37.611253+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:38.611453+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:39.611612+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:40.611865+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:41.612081+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:42.612239+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:43.612406+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:44.612591+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:45.612800+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:46.612990+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:47.613166+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:48.613401+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:49.614844+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:50.615009+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:51.615122+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:52.615280+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:53.615435+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:54.615588+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:55.615741+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:56.615899+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:57.616093+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:58.616279+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:59.616430+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:00.616600+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:01.616774+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:02.617006+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:03.617216+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:04.617347+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:05.617515+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:06.617681+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:07.617935+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:08.618134+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:09.618391+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:10.618561+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:11.618750+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:12.618958+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:13.619243+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:14.619393+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:15.620668+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:16.622078+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:17.622749+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:18.622962+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:19.623222+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:20.623486+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:21.623711+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:22.624026+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:23.624582+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:24.624805+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:25.643270+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:26.643453+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:01 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:01 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 20:56:01 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 22921216 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:27.643671+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: do_command 'config diff' '{prefix=config diff}'
Dec 01 20:56:01 compute-0 ceph-osd[87692]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 01 20:56:01 compute-0 ceph-osd[87692]: do_command 'config show' '{prefix=config show}'
Dec 01 20:56:01 compute-0 ceph-osd[87692]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 01 20:56:01 compute-0 ceph-osd[87692]: do_command 'counter dump' '{prefix=counter dump}'
Dec 01 20:56:01 compute-0 ceph-osd[87692]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 22732800 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: do_command 'counter schema' '{prefix=counter schema}'
Dec 01 20:56:01 compute-0 ceph-osd[87692]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:28.643904+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 22519808 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:29.644147+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 22519808 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 20:56:01 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:30.650316+0000)
Dec 01 20:56:01 compute-0 ceph-osd[87692]: do_command 'log dump' '{prefix=log dump}'
Dec 01 20:56:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 01 20:56:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211282006' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 01 20:56:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 01 20:56:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1389392377' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 01 20:56:01 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 20:56:01 compute-0 rsyslogd[1006]: imjournal from <np0005541545:ceph-osd>: begin to drop messages due to rate-limiting
Dec 01 20:56:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 01 20:56:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3538906243' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 01 20:56:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 01 20:56:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/241509865' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 01 20:56:01 compute-0 ceph-mon[75880]: pgmap v834: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1211282006' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 01 20:56:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1389392377' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 01 20:56:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3538906243' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 01 20:56:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/241509865' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 01 20:56:01 compute-0 nova_compute[244568]: 2025-12-01 20:56:01.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:56:01 compute-0 nova_compute[244568]: 2025-12-01 20:56:01.980 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:56:01 compute-0 nova_compute[244568]: 2025-12-01 20:56:01.981 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:56:01 compute-0 nova_compute[244568]: 2025-12-01 20:56:01.982 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:56:01 compute-0 nova_compute[244568]: 2025-12-01 20:56:01.982 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 20:56:01 compute-0 nova_compute[244568]: 2025-12-01 20:56:01.982 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:56:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 01 20:56:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368169336' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 01 20:56:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 01 20:56:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1692939383' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 01 20:56:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:56:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/562056305' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:56:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:56:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/562056305' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:56:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:56:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1793201264' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:56:02 compute-0 nova_compute[244568]: 2025-12-01 20:56:02.552 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:56:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 01 20:56:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3082211913' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 01 20:56:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 01 20:56:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2290500742' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 01 20:56:02 compute-0 nova_compute[244568]: 2025-12-01 20:56:02.739 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:56:02 compute-0 nova_compute[244568]: 2025-12-01 20:56:02.741 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5036MB free_disk=59.988265527412295GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 20:56:02 compute-0 nova_compute[244568]: 2025-12-01 20:56:02.742 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:56:02 compute-0 nova_compute[244568]: 2025-12-01 20:56:02.742 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:56:02 compute-0 nova_compute[244568]: 2025-12-01 20:56:02.858 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 20:56:02 compute-0 nova_compute[244568]: 2025-12-01 20:56:02.858 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 20:56:02 compute-0 nova_compute[244568]: 2025-12-01 20:56:02.875 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:56:02 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1368169336' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 01 20:56:02 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1692939383' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 01 20:56:02 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/562056305' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:56:02 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/562056305' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:56:02 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1793201264' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:56:02 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3082211913' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 01 20:56:02 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2290500742' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 01 20:56:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v835: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 01 20:56:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2531119880' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 01 20:56:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 01 20:56:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3255334231' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 01 20:56:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:56:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:56:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:56:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:56:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:56:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:56:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:56:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2739592117' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:56:03 compute-0 nova_compute[244568]: 2025-12-01 20:56:03.435 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:56:03 compute-0 nova_compute[244568]: 2025-12-01 20:56:03.441 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:56:03 compute-0 nova_compute[244568]: 2025-12-01 20:56:03.492 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:56:03 compute-0 nova_compute[244568]: 2025-12-01 20:56:03.494 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 20:56:03 compute-0 nova_compute[244568]: 2025-12-01 20:56:03.494 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:56:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 01 20:56:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3813375204' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 01 20:56:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 01 20:56:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2193578366' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 01 20:56:04 compute-0 ceph-mon[75880]: pgmap v835: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:04 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2531119880' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 01 20:56:04 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3255334231' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 01 20:56:04 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2739592117' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:56:04 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3813375204' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 01 20:56:04 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2193578366' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 01 20:56:04 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14819 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 01 20:56:04 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2558968328' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 01 20:56:04 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14822 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v836: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:04 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14824 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:05 compute-0 ceph-mon[75880]: from='client.14819 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:05 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2558968328' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 01 20:56:05 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14826 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:05 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14828 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:05 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14830 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=10.446947098s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 peering pruub 95.665847778s@ mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=10.446947098s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 peering pruub 95.665847778s@ mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=10.446947098s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 peering pruub 95.665847778s@ mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=10.446947098s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 peering pruub 95.665847778s@ mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=10.446947098s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 peering pruub 95.665847778s@ mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.002822 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.002760 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.001865 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.001770 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.003203 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.001733 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.001644 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.001608 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.003036 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.002896 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.003098 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.003173 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.004346 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.003269 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Initial 0.003249 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 39 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=0 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 2588672 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 330026 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 39 heartbeat osd_stat(store_statfs(0x4fe15e000/0x0/0x4ffc00000, data 0x27163/0x6a000, compress 0x0/0x0/0x0, omap 0x44f1, meta 0x1a2bb0f), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 39 handle_osd_map epochs [40,40], i have 39, src has [1,40]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 39 handle_osd_map epochs [40,40], i have 40, src has [1,40]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.296968 4 0.000041
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.297922 4 0.000045
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.297220 4 0.000093
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000111 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000072 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.297497 4 0.000033
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000051 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000052 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.297859 4 0.000026
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.298617 4 0.000029
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.298246 4 0.000026
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.297618 4 0.000027
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000013 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.298855 4 0.000036
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.298661 4 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.298634 4 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000015 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.298766 4 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.298555 4 0.000051
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.298989 4 0.000029
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Reset 0.298767 4 0.000019
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 40 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 40 handle_osd_map epochs [40,41], i have 40, src has [1,41]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053173 3 0.000231
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053353 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053261 3 0.000125
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.052656 3 0.000228
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.052707 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053409 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053193 3 0.000252
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053303 3 0.000128
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053399 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053533 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053256 3 0.000069
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053319 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053225 3 0.000121
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053305 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053362 3 0.000066
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053409 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053376 3 0.000079
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053424 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053369 3 0.000063
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053413 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 41 handle_osd_map epochs [39,41], i have 41, src has [1,41]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=10.446947098s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 peering pruub 95.665847778s@ mbc={}] exit Started/Primary/Peering/WaitUpThru 0.353178 6 0.000188
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=10.446947098s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 peering pruub 95.665847778s@ mbc={}] exit Started/Primary/Peering 0.353297 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053331 3 0.000099
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=10.446947098s) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 unknown pruub 95.665847778s@ mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053413 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053349 3 0.000082
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053402 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053306 3 0.000107
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053385 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053425 3 0.000140
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.053279 3 0.000159
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053483 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.053342 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=23/24 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004378 3 0.000116
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 5)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:40.950676+0000 osd.0 (osd.0) 4 : cluster [DBG] 4.1f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:40.961196+0000 osd.0 (osd.0) 5 : cluster [DBG] 4.1f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005799 3 0.000055
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005718 3 0.000046
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007044 3 0.000091
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007160 3 0.000063
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007036 3 0.000036
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007084 3 0.000067
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007245 3 0.000060
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=23/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007348 3 0.000052
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=23/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=23/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.0( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=23/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 33'38 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007445 3 0.000068
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007811 3 0.000486
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007541 3 0.000089
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007624 3 0.000063
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007940 3 0.000433
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007772 3 0.000074
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007865 3 0.000231
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 41 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/23 les/c/f=41/24/0 sis=39) [0] r=0 lpr=39 pi=[23,39)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:12.747906+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 7 sent 5 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:33:41.924143+0000 osd.0 (osd.0) 6 : cluster [DBG] 4.8 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:33:41.947882+0000 osd.0 (osd.0) 7 : cluster [DBG] 4.8 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 60407808 unmapped: 2416640 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:13.748154+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 60497920 unmapped: 2326528 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 7)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:41.924143+0000 osd.0 (osd.0) 6 : cluster [DBG] 4.8 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:41.947882+0000 osd.0 (osd.0) 7 : cluster [DBG] 4.8 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:14.748302+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.804915428s of 10.554049492s, submitted: 173
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 60358656 unmapped: 2465792 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:15.748465+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 9 sent 7 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:33:44.867941+0000 osd.0 (osd.0) 8 : cluster [DBG] 4.1c scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:33:44.878277+0000 osd.0 (osd.0) 9 : cluster [DBG] 4.1c scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 60366848 unmapped: 2457600 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 9)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:44.867941+0000 osd.0 (osd.0) 8 : cluster [DBG] 4.1c scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:44.878277+0000 osd.0 (osd.0) 9 : cluster [DBG] 4.1c scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:16.748668+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:33:45.841225+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.1d scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:33:45.851660+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.1d scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 2449408 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 339676 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 11)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:45.841225+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.1d scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:45.851660+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.1d scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:17.748885+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 2441216 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 41 heartbeat osd_stat(store_statfs(0x4fe15c000/0x0/0x4ffc00000, data 0x29bdd/0x70000, compress 0x0/0x0/0x0, omap 0x4569, meta 0x1a2ba97), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:18.748962+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 2441216 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 41 handle_osd_map epochs [41,42], i have 41, src has [1,42]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.417747 11 0.000121
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422816 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.422939 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.417824 11 0.000080
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.260173 1 0.000037
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422895 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.268052 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.321626 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.423129 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.417665 11 0.000154
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.422997 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422796 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.321662 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423164 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.423090 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423113 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739074707s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.581520081s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582242966s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424697876s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582140923s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424591064s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582109451s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424568176s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739041328s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581520081s@ mbc={}] exit Reset 0.000064 1 0.000083
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739041328s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581520081s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582103729s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424591064s@ mbc={}] exit Reset 0.000071 1 0.000143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739041328s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581520081s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582201958s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424697876s@ mbc={}] exit Reset 0.000061 1 0.000085
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582103729s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424591064s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739041328s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581520081s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582069397s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424568176s@ mbc={}] exit Reset 0.000075 1 0.000104
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582201958s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424697876s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582103729s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424591064s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739041328s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581520081s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582069397s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424568176s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582201958s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424697876s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739041328s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581520081s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582069397s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424568176s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582103729s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424591064s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582201958s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424697876s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582069397s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424568176s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582201958s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424697876s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582103729s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424591064s@ mbc={}] exit Start 0.000014 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582201958s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424697876s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582069397s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424568176s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582103729s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424591064s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582069397s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424568176s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.260179 1 0.000043
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.268303 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.321754 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.417883 11 0.000093
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422816 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.422906 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.417886 11 0.000109
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.422933 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422840 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.422933 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.321816 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.422964 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.261196 1 0.000044
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.268300 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582024574s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424728394s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.321632 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.321653 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582006454s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424728394s@ mbc={}] exit Reset 0.000037 1 0.000060
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582006454s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424728394s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581993103s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424720764s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582006454s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424728394s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582006454s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424728394s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739621162s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.582366943s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582006454s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424728394s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581974983s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424720764s@ mbc={}] exit Reset 0.000043 1 0.000067
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.582006454s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424728394s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738656044s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.581413269s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581974983s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424720764s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581974983s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424720764s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581974983s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424720764s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581974983s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424720764s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581974983s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424720764s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738636971s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581413269s@ mbc={}] exit Reset 0.000062 1 0.000071
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.417970 11 0.000065
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739597321s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582366943s@ mbc={}] exit Reset 0.000096 1 0.000134
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738636971s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581413269s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422841 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738636971s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581413269s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.422921 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739597321s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582366943s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738636971s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581413269s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739597321s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582366943s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738636971s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581413269s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.422953 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739597321s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582366943s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738636971s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581413269s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.417966 11 0.000063
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422834 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739597321s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582366943s@ mbc={}] exit Start 0.000013 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.422929 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739597321s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582366943s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581933022s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424774170s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.422958 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581916809s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424774170s@ mbc={}] exit Reset 0.000033 1 0.000057
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581916809s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424774170s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581902504s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424766541s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581916809s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424774170s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581916809s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424774170s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581916809s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424774170s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581916809s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424774170s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581887245s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] exit Reset 0.000031 1 0.000052
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.261246 1 0.000035
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581887245s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581887245s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.268444 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.321765 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.321801 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738620758s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.581542969s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.418017 11 0.000099
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738601685s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581542969s@ mbc={}] exit Reset 0.000035 1 0.000059
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422860 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738601685s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581542969s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.422980 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738601685s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581542969s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738601685s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581542969s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581887245s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738601685s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581542969s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581887245s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] exit Start 0.000077 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738601685s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581542969s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581887245s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581788063s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424766541s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581771851s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] exit Reset 0.000033 1 0.000056
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.418161 11 0.000093
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581771851s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581771851s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581771851s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581771851s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581771851s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424766541s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.261388 1 0.000052
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.268462 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.321897 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.321922 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738469124s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.581558228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738449097s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581558228s@ mbc={}] exit Reset 0.000038 1 0.000083
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738449097s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581558228s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738449097s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581558228s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738449097s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581558228s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738449097s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581558228s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738449097s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581558228s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.261248 1 0.000029
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.268542 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.321975 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.410454 11 0.000080
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422788 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.422982 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.322006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.410413 11 0.000086
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423018 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589272499s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432479858s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422768 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.422886 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738572121s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.581802368s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423057 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589234352s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432495117s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738533020s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581802368s@ mbc={}] exit Reset 0.000086 1 0.000138
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.260663 1 0.000046
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738533020s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581802368s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.268571 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738533020s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581802368s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.321970 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738533020s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581802368s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738533020s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581802368s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.321998 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738533020s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.581802368s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739022255s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.582336426s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739005089s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582336426s@ mbc={}] exit Reset 0.000037 1 0.000061
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739005089s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582336426s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739005089s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582336426s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739005089s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582336426s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.410467 11 0.000070
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422685 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.422832 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.422896 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589182854s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] exit Reset 0.000072 1 0.000140
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589239120s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432693481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589222908s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] exit Reset 0.000048 1 0.000072
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589182854s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589222908s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589182854s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589222908s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589222908s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589222908s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589182854s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589222908s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.423010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739005089s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582336426s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.739005089s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582336426s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.423667 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.410573 11 0.000210
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423696 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422974 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.423074 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423109 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.261209 1 0.000088
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588839531s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432495117s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.268892 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588819504s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] exit Reset 0.000039 1 0.000062
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.322249 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581080437s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.424758911s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588819504s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589182854s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] exit Start 0.000131 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588819504s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 7.322290 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588819504s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.589182854s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588819504s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588819504s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432495117s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581054688s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424758911s@ mbc={}] exit Reset 0.000083 1 0.000710
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581054688s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424758911s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581054688s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424758911s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581054688s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424758911s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581054688s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424758911s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.581054688s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.424758911s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.410783 11 0.000070
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422942 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.423105 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423179 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588906288s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432693481s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588892937s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] exit Reset 0.000029 1 0.000047
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588892937s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.410867 11 0.000084
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588892937s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.423104 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588892937s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738539696s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 active pruub 101.582351685s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.423238 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588892937s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588892937s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432693481s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.410800 11 0.000069
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423269 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422965 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.423153 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738516808s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582351685s@ mbc={}] exit Reset 0.000135 1 0.000174
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738516808s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582351685s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423182 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738516808s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582351685s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588840485s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432685852s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738516808s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582351685s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738516808s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582351685s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=8.738516808s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 101.582351685s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588824272s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432685852s@ mbc={}] exit Reset 0.000032 1 0.000052
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588879585s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432746887s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588824272s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432685852s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588824272s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432685852s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588824272s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432685852s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.410758 11 0.000058
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588824272s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432685852s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588863373s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432746887s@ mbc={}] exit Reset 0.000033 1 0.000056
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422963 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588824272s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432685852s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588863373s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432746887s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.423051 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588863373s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432746887s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588863373s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432746887s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423083 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588863373s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432746887s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588863373s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432746887s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.410880 11 0.000107
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422969 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.423068 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588925362s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432853699s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.410891 11 0.000177
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423104 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.423004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588911057s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432853699s@ mbc={}] exit Reset 0.000034 1 0.000056
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.423092 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588911057s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432853699s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588911057s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432853699s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.423125 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588911057s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432853699s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588911057s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432853699s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588911057s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432853699s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588806152s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432777405s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588847160s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432815552s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588789940s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432777405s@ mbc={}] exit Reset 0.000046 1 0.000069
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588831902s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432815552s@ mbc={}] exit Reset 0.000032 1 0.000056
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588789940s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432777405s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588831902s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432815552s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588789940s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432777405s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588831902s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432815552s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588789940s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432777405s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588831902s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432815552s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.410767 11 0.000064
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588789940s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432777405s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588831902s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432815552s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.422821 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588789940s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432777405s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588831902s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432815552s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.422905 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.422939 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588873863s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 active pruub 105.432914734s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588836670s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432914734s@ mbc={}] exit Reset 0.000067 1 0.000075
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588836670s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432914734s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588836670s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432914734s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588836670s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432914734s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588836670s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432914734s@ mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.588836670s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432914734s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.587801933s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432479858s@ mbc={}] exit Reset 0.001492 1 0.001514
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.587801933s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432479858s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.587801933s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432479858s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.587801933s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432479858s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.587801933s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432479858s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=12.587801933s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY pruub 105.432479858s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 42 handle_osd_map epochs [42,42], i have 42, src has [1,42]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 42 handle_osd_map epochs [42,42], i have 42, src has [1,42]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000017
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000089 1 0.000036
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000065 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000015
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000024
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000064 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000024
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000081 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000017
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000081 1 0.000044
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000023
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000033
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000016
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000098 1 0.000056
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000034
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000129 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000017
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000082 1 0.000032
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000013
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000034
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000016
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000031
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000065 1 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000012
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000069 1 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000009
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000021
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000081 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000027
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000026
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000023
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000021
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000035
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=0 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000072 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000073 1 0.000036
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000014
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000026
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000109 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000035 1 0.000012
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000095 1 0.000085
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000180 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000012
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000087 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000110 1 0.000159
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000015
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000077 1 0.000024
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000068 1 0.000027
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000016
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000048
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000028
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000018
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000033
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000012
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000034
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000014
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000023
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000149 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000028
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000030
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000016
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000039
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000075 1 0.000080
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000016
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000034
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014838 2 0.000031
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014585 2 0.000026
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014451 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014360 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013796 2 0.000038
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013630 2 0.000026
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013165 2 0.000032
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013001 2 0.000039
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012716 2 0.000037
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012695 2 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012552 2 0.000018
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011708 2 0.000027
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012469 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011372 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010668 2 0.000034
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010510 2 0.000021
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011160 2 0.000019
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010371 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008617 2 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008892 2 0.000032
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008516 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008193 2 0.000037
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007519 2 0.000051
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007010 2 0.000049
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008025 2 0.000031
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006127 2 0.000021
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005933 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005853 2 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006810 2 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005708 2 0.000024
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007266 2 0.000030
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005428 2 0.000021
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005007 2 0.000024
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004856 2 0.000027
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004671 2 0.000023
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005442 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004415 2 0.000024
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004147 2 0.000018
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003929 2 0.000024
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003373 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003695 2 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:19.749127+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 42 heartbeat osd_stat(store_statfs(0x4fe15c000/0x0/0x4ffc00000, data 0x29bdd/0x70000, compress 0x0/0x0/0x0, omap 0x4569, meta 0x1a2ba97), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 483328 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 42 handle_osd_map epochs [42,43], i have 42, src has [1,43]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979425 2 0.000034
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988030 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979588 2 0.000058
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988303 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979160 2 0.000023
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979017 2 0.000028
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986319 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985241 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978799 2 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985713 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.981148 2 0.000034
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995688 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978641 2 0.000030
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986012 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.981302 2 0.000036
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995986 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979414 2 0.000029
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987071 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.981457 2 0.000027
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996420 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.981217 2 0.000028
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978584 2 0.000054
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983353 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995663 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980119 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991347 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978500 2 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982996 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.273142 5 0.000081
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.280660 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.981018 2 0.000026
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.334078 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994104 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980464 2 0.000027
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991933 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 8.334115 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980378 2 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991183 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978673 2 0.000027
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984185 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726775169s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 active pruub 109.582183838s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726726532s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582183838s@ mbc={}] exit Reset 0.000086 1 0.000131
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980752 2 0.000038
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992550 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726726532s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582183838s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726726532s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582183838s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726726532s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582183838s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726726532s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582183838s@ mbc={}] exit Start 0.000013 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726726532s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582183838s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979006 2 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984105 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.273350 5 0.000056
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.281088 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.334524 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978897 2 0.000049
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 8.334592 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982675 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979397 2 0.000032
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984946 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726365089s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 active pruub 109.582260132s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.981177 2 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993724 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979694 2 0.000021
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985490 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726216316s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582260132s@ mbc={}] exit Reset 0.000260 1 0.000444
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726216316s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582260132s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726216316s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582260132s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980015 2 0.000049
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726216316s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582260132s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986028 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726216316s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582260132s@ mbc={}] exit Start 0.000015 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.726216316s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582260132s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.274480 5 0.000066
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.281607 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.335029 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 8.335056 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979334 2 0.000023
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983353 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.725442886s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 active pruub 109.581634521s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979642 2 0.000026
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984565 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.725407600s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.581634521s@ mbc={}] exit Reset 0.000066 1 0.000098
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.981699 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994467 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.725407600s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.581634521s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.725407600s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.581634521s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.725407600s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.581634521s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.725407600s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.581634521s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.725407600s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.581634521s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979521 2 0.000024
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983738 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.981206 2 0.000029
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982233 2 0.000148
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991668 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995958 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980296 2 0.000027
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986216 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982227 2 0.000028
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995062 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982335 2 0.000026
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995628 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982089 2 0.000021
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982571 2 0.000027
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994715 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996480 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.981380 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990363 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.981678 2 0.000021
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992269 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.278196 5 0.000054
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.282647 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.336014 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 8.336054 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.980912 2 0.000021
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989015 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.721846581s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 active pruub 109.578636169s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.981200 2 0.000021
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.721824646s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.578636169s@ mbc={}] exit Reset 0.000047 1 0.000076
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989497 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.721824646s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.578636169s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.721824646s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.578636169s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.721824646s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.578636169s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.721824646s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.578636169s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43 pruub=15.721824646s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.578636169s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979974 2 0.000023
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983431 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002102 3 0.000103
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015095 7 0.000313
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014858 7 0.000054
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 43 handle_osd_map epochs [42,43], i have 43, src has [1,43]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017436 7 0.000057
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018288 7 0.000057
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018023 7 0.000066
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018378 7 0.000045
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006746 3 0.000127
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006673 3 0.000061
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006654 3 0.000045
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006699 3 0.000081
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006758 3 0.000047
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006863 3 0.000158
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006776 3 0.000043
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006975 3 0.000114
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006821 3 0.000060
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006838 3 0.000044
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006844 3 0.000040
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006849 3 0.000080
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006738 3 0.000045
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006753 3 0.000050
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006730 3 0.000091
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006762 3 0.000039
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006895 3 0.000076
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006378 3 0.000061
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006662 3 0.000044
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006327 3 0.000080
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006267 3 0.000047
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006791 3 0.000044
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000055 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006279 3 0.000167
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006165 3 0.000060
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006242 3 0.000069
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006150 3 0.000056
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006004 3 0.000045
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006167 3 0.000177
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006110 3 0.000178
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006088 3 0.000414
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006057 3 0.000044
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006090 3 0.000083
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007599 3 0.000064
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007547 4 0.000054
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007524 3 0.000063
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007554 4 0.000099
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007610 3 0.000045
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007693 3 0.000048
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007658 3 0.000046
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/18 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007680 3 0.000065
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=39/22 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030485 7 0.000050
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030174 7 0.000060
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031154 7 0.000045
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030695 7 0.000041
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030022 7 0.000407
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030858 7 0.000040
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030921 7 0.000088
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029944 7 0.000042
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029768 7 0.000042
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031268 7 0.000057
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029927 7 0.000039
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029874 7 0.000046
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030103 7 0.000042
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000335 1 0.000054
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031267 7 0.000054
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031104 7 0.000120
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000427 1 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000490 1 0.000017
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000607 1 0.000084
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000632 1 0.000017
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000669 1 0.000015
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000709 1 0.000014
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000764 1 0.000058
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000815 1 0.000017
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000857 1 0.000014
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001068 1 0.000187
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000944 1 0.000050
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001131 1 0.000175
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000882 1 0.000048
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000888 1 0.000016
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035039 7 0.000056
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034706 7 0.000088
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034070 7 0.000137
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034984 7 0.000049
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034903 7 0.000047
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036144 7 0.000051
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000180 1 0.000050
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036417 7 0.000056
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000331 1 0.000034
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000415 1 0.000027
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000685 1 0.000025
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000745 1 0.000014
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000830 1 0.000134
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000826 1 0.000043
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060362 1 0.000053
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.060743 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.1( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.091279 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067803 1 0.000035
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.068279 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.099006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.074943 1 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.075460 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.106344 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.082274 1 0.000023
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.082926 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.114131 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.089485 1 0.000030
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.090151 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.121113 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096862 1 0.000019
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.097556 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.127352 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.104154 1 0.000018
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.104886 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.136184 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111583 1 0.000030
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112412 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.142387 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.10( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118830 1 0.000023
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.10( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.119672 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.10( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.149625 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.126161 1 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.127040 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.156939 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.d( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.133654 1 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.d( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.134769 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.d( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.164986 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.140905 1 0.000014
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.141874 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.172030 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:20.749312+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.148340 1 0.000015
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.149507 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.179776 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.155607 1 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.156514 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.187812 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.162933 1 0.000017
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.163846 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.195051 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.166401 1 0.000026
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.166601 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.201678 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.173500 1 0.000052
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.173868 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.208054 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.180755 1 0.000057
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.181221 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.216247 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.187903 1 0.000031
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.188623 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.223558 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.195246 1 0.000023
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.196016 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.232194 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.202742 1 0.000022
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.203637 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.238430 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1c( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.209958 1 0.000050
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1c( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.210852 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[4.1c( empty lb MIN local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.247319 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.301786 2 0.000150
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.301818 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000116 1 0.000065
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 43 heartbeat osd_stat(store_statfs(0x4fe154000/0x0/0x4ffc00000, data 0x2cedf/0x76000, compress 0x0/0x0/0x0, omap 0x45e1, meta 0x1a2ba1f), peers [1,2] op hist [0,0,1])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.425583 2 0.000064
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.425610 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000128 1 0.000049
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.136583 2 0.000171
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.136735 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.453957 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 188416 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.571515 2 0.000034
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.571543 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000069 1 0.000052
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.153649 2 0.000211
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.153834 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.594359 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.713899 2 0.000020
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.713935 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000118 1 0.000073
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.146413 2 0.000258
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.146519 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.735546 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.787481 2 0.000721
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.788065 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000136 1 0.000097
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.853736 2 0.000038
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.853769 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000108 1 0.000068
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.144726 2 0.000232
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.144937 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.877202 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.092526 2 0.000139
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.092718 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.898997 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.033745 2 0.000116
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.033888 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 43 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.906082 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 43 handle_osd_map epochs [44,44], i have 43, src has [1,44]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016741 6 0.000082
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017523 6 0.000081
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022644 7 0.000119
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021929 7 0.000089
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000153 1 0.000060
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000232 1 0.000077
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 DELETING pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.003286 1 0.000047
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.003481 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.2( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.026190 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 DELETING pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.010616 1 0.000050
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.010879 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.032869 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:21.749458+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.156245 3 0.000055
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.156274 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000091 1 0.000057
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.166811 3 0.000086
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.166840 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000109 1 0.000060
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 DELETING pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.013479 2 0.000139
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.013613 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.e( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.187470 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 DELETING pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.017605 2 0.000125
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.017766 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 44 pg[6.6( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=43) [1] r=-1 lpr=43 pi=[39,43)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.201412 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 139264 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 337376 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 44 handle_osd_map epochs [44,45], i have 44, src has [1,45]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:22.749608+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 122880 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=0 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000236 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=0 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000040
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000565 1 0.000070
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=0 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000182 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=0 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000038 1 0.000071
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000219 1 0.000100
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=0 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000102 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=0 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000077
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000071
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=0 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000088 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=0 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000034
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000057
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001387 2 0.000061
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.002947 2 0.000072
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001044 2 0.000044
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002071 2 0.000157
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:23.749764+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 131072 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 45 handle_osd_map epochs [45,46], i have 45, src has [1,46]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 45 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008347 2 0.000112
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008336 2 0.000095
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.009605 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.010775 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008421 2 0.000132
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.012041 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008760 2 0.000093
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.010317 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 12.332122 14 0.000087
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 12.339955 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 12.393456 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 12.393487 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667674065s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 109.582473755s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] exit Reset 0.000070 1 0.000109
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 12.334982 14 0.000097
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 12.340821 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 12.393542 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 12.393725 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664979935s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 109.580123901s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] exit Reset 0.000064 1 0.000102
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002648 4 0.000228
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000136 1 0.000077
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.004810 4 0.000192
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.004910 4 0.000267
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005039 5 0.000228
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.008375 2 0.000109
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.006281 2 0.000080
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:24.749957+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.207840 1 0.000055
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.214120 2 0.000044
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.793719292s of 10.028434753s, submitted: 384
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.110695 1 0.000083
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.324787 1 0.000189
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000014 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.081289 1 0.000105
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1056768 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.004994 6 0.000132
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.004685 6 0.000096
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:25.750107+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.186125 3 0.000063
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.186190 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000123 1 0.000132
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.245147 3 0.000076
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.245205 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000088 1 0.000093
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 DELETING pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.086250 2 0.000375
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.086454 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.277412 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 DELETING pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.041949 2 0.000117
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.042086 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.292352 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1056768 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=0 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000114 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=0 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000033
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000207 1 0.000049
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=0 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000385 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=0 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000064
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000208 1 0.000113
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001304 2 0.000036
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000739 2 0.000124
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:26.750306+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 48 heartbeat osd_stat(store_statfs(0x4fe14a000/0x0/0x4ffc00000, data 0x33333/0x80000, compress 0x0/0x0/0x0, omap 0x96be, meta 0x1a26942), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 974848 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 356640 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.017369 2 0.000075
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.018427 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.018583 2 0.000097
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.020171 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 49 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002482 4 0.000214
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000069 1 0.000059
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.006425 4 0.000228
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:27.750456+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067784 2 0.000028
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.063130 2 0.000060
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126551 1 0.000053
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 942080 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 49 handle_osd_map epochs [49,50], i have 49, src has [1,50]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:28.750609+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:33:58.717228+0000 osd.0 (osd.0) 12 : cluster [DBG] 4.1e scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:33:58.727821+0000 osd.0 (osd.0) 13 : cluster [DBG] 4.1e scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 851968 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 13)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:58.717228+0000 osd.0 (osd.0) 12 : cluster [DBG] 4.1e scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:58.727821+0000 osd.0 (osd.0) 13 : cluster [DBG] 4.1e scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 50 heartbeat osd_stat(store_statfs(0x4fe13e000/0x0/0x4ffc00000, data 0x35e69/0x88000, compress 0x0/0x0/0x0, omap 0x9aed, meta 0x1a26513), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:29.750767+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 851968 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:30.750942+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 851968 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:31.751149+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 851968 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 375323 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 20.420049 33 0.000150
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 20.425826 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 20.479242 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 20.479273 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580068588s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 active pruub 117.580322266s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] exit Reset 0.000092 1 0.000170
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 52 handle_osd_map epochs [51,52], i have 52, src has [1,52]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:32.751296+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 786432 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:33.751427+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.192625 6 0.000097
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000168 1 0.000041
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 DELETING pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002214 2 0.000084
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002469 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.195136 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 761856 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:34.751608+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 761856 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 53 heartbeat osd_stat(store_statfs(0x4fe137000/0x0/0x4ffc00000, data 0x39f15/0x91000, compress 0x0/0x0/0x0, omap 0x9d08, meta 0x1a262f8), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:35.751788+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 761856 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 53 heartbeat osd_stat(store_statfs(0x4fe137000/0x0/0x4ffc00000, data 0x39f15/0x91000, compress 0x0/0x0/0x0, omap 0x9d08, meta 0x1a262f8), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:36.752071+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 753664 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 378241 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:37.752257+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 753664 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:38.752394+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 745472 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.348392487s of 14.436017990s, submitted: 40
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=0 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000107 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=0 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000037
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000181 1 0.000045
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001868 2 0.000043
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:39.752573+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 54 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.477245 2 0.000070
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.479378 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002872 4 0.000204
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 712704 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:40.752771+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 55 heartbeat osd_stat(store_statfs(0x4fe131000/0x0/0x4ffc00000, data 0x3c9ab/0x97000, compress 0x0/0x0/0x0, omap 0x9e89, meta 0x1a26177), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a(unlocked)] enter Initial
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=0 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=0 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000032
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000193 1 0.000042
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 56 handle_osd_map epochs [56,56], i have 56, src has [1,56]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000502 2 0.000065
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 671744 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:41.752960+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.921770 2 0.000035
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.922518 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/43 les/c/f=57/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001848 3 0.000125
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/43 les/c/f=57/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/43 les/c/f=57/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/43 les/c/f=57/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 630784 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 391685 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:42.753081+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 57 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 622592 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:43.753236+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 606208 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 57 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x3f441/0x9d000, compress 0x0/0x0/0x0, omap 0x9fbd, meta 0x1a26043), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 57 handle_osd_map epochs [58,58], i have 58, src has [1,58]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 19.701046 36 0.000230
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 19.712445 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 20.722075 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 20.722117 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290416718s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 active pruub 129.917236328s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] exit Reset 0.000130 1 0.000203
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] exit Start 0.000016 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:44.753401+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:13.843435+0000 osd.0 (osd.0) 14 : cluster [DBG] 4.b scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:13.854006+0000 osd.0 (osd.0) 15 : cluster [DBG] 4.b scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 15)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:13.843435+0000 osd.0 (osd.0) 14 : cluster [DBG] 4.b scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:13.854006+0000 osd.0 (osd.0) 15 : cluster [DBG] 4.b scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.531475 7 0.000444
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 59 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.012176 2 0.000063
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.012218 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000106 1 0.000080
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 DELETING pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.009484 2 0.000205
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.009651 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 0.553422 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 581632 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:45.753648+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 581632 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:46.753826+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 540672 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 399587 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:47.754126+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:16.880504+0000 osd.0 (osd.0) 16 : cluster [DBG] 4.6 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:16.891058+0000 osd.0 (osd.0) 17 : cluster [DBG] 4.6 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 20.068285 33 0.000372
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 20.138829 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 21.157277 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 21.157351 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863982201s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 active pruub 132.959640503s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] exit Reset 0.000115 1 0.000175
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 60 heartbeat osd_stat(store_statfs(0x4fe125000/0x0/0x4ffc00000, data 0x42075/0xa3000, compress 0x0/0x0/0x0, omap 0x89c5, meta 0x1a2763b), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 17)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:16.880504+0000 osd.0 (osd.0) 16 : cluster [DBG] 4.6 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:16.891058+0000 osd.0 (osd.0) 17 : cluster [DBG] 4.6 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 557056 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.640263 6 0.000106
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.118565 3 0.000087
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.118639 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000085 1 0.000100
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 DELETING pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.017222 2 0.000296
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.017377 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 0.776345 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:48.754464+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 499712 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:49.754695+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:18.914339+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.19 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:18.924859+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.19 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.497750282s of 10.585931778s, submitted: 36
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 19)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:18.914339+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.19 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:18.924859+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.19 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 442368 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:50.754967+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:19.918476+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:19.929167+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 21)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:19.918476+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:19.929167+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 417792 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:51.755241+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 409600 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 410126 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 62 heartbeat osd_stat(store_statfs(0x4fe11b000/0x0/0x4ffc00000, data 0x460db/0xab000, compress 0x0/0x0/0x0, omap 0x8b7c, meta 0x1a27484), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 62 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 27.693333 52 0.000336
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 27.912475 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 28.922831 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 28.922888 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092677116s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 active pruub 137.919769287s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] exit Reset 0.000200 1 0.000321
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] enter Started
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] enter Start
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] exit Start 0.000055 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] enter Started/Stray
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:52.755380+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:21.934406+0000 osd.0 (osd.0) 22 : cluster [DBG] 4.0 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:21.944819+0000 osd.0 (osd.0) 23 : cluster [DBG] 4.0 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 23)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:21.934406+0000 osd.0 (osd.0) 22 : cluster [DBG] 4.0 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:21.944819+0000 osd.0 (osd.0) 23 : cluster [DBG] 4.0 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.415573 6 0.000210
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.138296 3 0.000075
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.138348 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000236 1 0.000102
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.025018 2 0.000230
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.025358 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 0.579425 0 0.000000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 360448 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:53.755648+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:23.030935+0000 osd.0 (osd.0) 24 : cluster [DBG] 4.c scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:23.100861+0000 osd.0 (osd.0) 25 : cluster [DBG] 4.c scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 25)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:23.030935+0000 osd.0 (osd.0) 24 : cluster [DBG] 4.c scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:23.100861+0000 osd.0 (osd.0) 25 : cluster [DBG] 4.c scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 319488 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:54.755875+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 303104 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:55.756262+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:25.053318+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.15 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:25.063861+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.15 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 27)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:25.053318+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.15 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:25.063861+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.15 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 303104 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:56.756546+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416133 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:57.756710+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:26.992688+0000 osd.0 (osd.0) 28 : cluster [DBG] 4.16 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:27.003225+0000 osd.0 (osd.0) 29 : cluster [DBG] 4.16 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 29)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:26.992688+0000 osd.0 (osd.0) 28 : cluster [DBG] 4.16 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:27.003225+0000 osd.0 (osd.0) 29 : cluster [DBG] 4.16 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:58.756944+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:28.001516+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.17 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:28.012082+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.17 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 31)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:28.001516+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.17 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:28.012082+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.17 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:59.757337+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 278528 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:00.757658+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 278528 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:01.757881+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 262144 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418546 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:02.758093+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 262144 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:03.758358+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:04.758521+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:05.758708+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:06.758930+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 245760 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418546 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:07.759115+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.146697998s of 18.192071915s, submitted: 21
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 237568 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:08.759326+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:38.110524+0000 osd.0 (osd.0) 32 : cluster [DBG] 7.1b scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:38.121091+0000 osd.0 (osd.0) 33 : cluster [DBG] 7.1b scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 33)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:38.110524+0000 osd.0 (osd.0) 32 : cluster [DBG] 7.1b scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:38.121091+0000 osd.0 (osd.0) 33 : cluster [DBG] 7.1b scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:09.759617+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:10.759850+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:11.760012+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 212992 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423372 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:12.760293+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:42.043953+0000 osd.0 (osd.0) 34 : cluster [DBG] 5.14 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:42.054491+0000 osd.0 (osd.0) 35 : cluster [DBG] 5.14 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 35)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:42.043953+0000 osd.0 (osd.0) 34 : cluster [DBG] 5.14 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:42.054491+0000 osd.0 (osd.0) 35 : cluster [DBG] 5.14 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 212992 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:13.760555+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 204800 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:14.760693+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:44.109010+0000 osd.0 (osd.0) 36 : cluster [DBG] 5.15 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:44.119596+0000 osd.0 (osd.0) 37 : cluster [DBG] 5.15 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 37)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:44.109010+0000 osd.0 (osd.0) 36 : cluster [DBG] 5.15 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:44.119596+0000 osd.0 (osd.0) 37 : cluster [DBG] 5.15 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 196608 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:15.760888+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 196608 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:16.761029+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 147456 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 428198 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:17.761210+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:47.084756+0000 osd.0 (osd.0) 38 : cluster [DBG] 2.13 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:47.095286+0000 osd.0 (osd.0) 39 : cluster [DBG] 2.13 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 39)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:47.084756+0000 osd.0 (osd.0) 38 : cluster [DBG] 2.13 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:47.095286+0000 osd.0 (osd.0) 39 : cluster [DBG] 2.13 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 139264 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:18.761478+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:48.073857+0000 osd.0 (osd.0) 40 : cluster [DBG] 2.11 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:48.084427+0000 osd.0 (osd.0) 41 : cluster [DBG] 2.11 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 41)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:48.073857+0000 osd.0 (osd.0) 40 : cluster [DBG] 2.11 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:48.084427+0000 osd.0 (osd.0) 41 : cluster [DBG] 2.11 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 131072 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:19.761791+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.945369720s of 11.965550423s, submitted: 10
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 131072 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:20.761998+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:50.076204+0000 osd.0 (osd.0) 42 : cluster [DBG] 3.15 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:50.086698+0000 osd.0 (osd.0) 43 : cluster [DBG] 3.15 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 43)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:50.076204+0000 osd.0 (osd.0) 42 : cluster [DBG] 3.15 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:50.086698+0000 osd.0 (osd.0) 43 : cluster [DBG] 3.15 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 131072 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:21.762204+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 433024 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:22.762344+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:23.762447+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:24.762590+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:54.088634+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.16 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:54.098996+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.16 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 114688 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 45)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:54.088634+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.16 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:54.098996+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.16 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:25.762783+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 114688 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:26.762924+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 106496 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 435437 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:27.763109+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 106496 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:28.763299+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 98304 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:29.763492+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:59.074945+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.12 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:59.085482+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.12 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 47)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:59.074945+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.12 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:59.085482+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.12 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:30.763729+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:31.763902+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 437850 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:32.764090+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:33.764221+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:34.764404+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.023485184s of 15.041648865s, submitted: 6
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:35.764589+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:05.117904+0000 osd.0 (osd.0) 48 : cluster [DBG] 3.17 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:05.128478+0000 osd.0 (osd.0) 49 : cluster [DBG] 3.17 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 49)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:05.117904+0000 osd.0 (osd.0) 48 : cluster [DBG] 3.17 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:05.128478+0000 osd.0 (osd.0) 49 : cluster [DBG] 3.17 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:36.764851+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442676 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:37.764995+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:07.076115+0000 osd.0 (osd.0) 50 : cluster [DBG] 7.13 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:07.086687+0000 osd.0 (osd.0) 51 : cluster [DBG] 7.13 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 49152 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 51)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:07.076115+0000 osd.0 (osd.0) 50 : cluster [DBG] 7.13 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:07.086687+0000 osd.0 (osd.0) 51 : cluster [DBG] 7.13 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:38.765161+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:08.087875+0000 osd.0 (osd.0) 52 : cluster [DBG] 7.f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:08.098574+0000 osd.0 (osd.0) 53 : cluster [DBG] 7.f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 40960 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:39.765408+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 4 last_log 55 sent 53 num 4 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:09.051158+0000 osd.0 (osd.0) 54 : cluster [DBG] 7.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:09.061745+0000 osd.0 (osd.0) 55 : cluster [DBG] 7.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 53)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:08.087875+0000 osd.0 (osd.0) 52 : cluster [DBG] 7.f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:08.098574+0000 osd.0 (osd.0) 53 : cluster [DBG] 7.f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 24576 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:40.765756+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 4 last_log 57 sent 55 num 4 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:10.084395+0000 osd.0 (osd.0) 56 : cluster [DBG] 5.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:10.094970+0000 osd.0 (osd.0) 57 : cluster [DBG] 5.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 55)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:09.051158+0000 osd.0 (osd.0) 54 : cluster [DBG] 7.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:09.061745+0000 osd.0 (osd.0) 55 : cluster [DBG] 7.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 57)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:10.084395+0000 osd.0 (osd.0) 56 : cluster [DBG] 5.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:10.094970+0000 osd.0 (osd.0) 57 : cluster [DBG] 5.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 16384 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:41.765916+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 1048576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452320 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:42.766049+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:12.115437+0000 osd.0 (osd.0) 58 : cluster [DBG] 3.6 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:12.126011+0000 osd.0 (osd.0) 59 : cluster [DBG] 3.6 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 59)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:12.115437+0000 osd.0 (osd.0) 58 : cluster [DBG] 3.6 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:12.126011+0000 osd.0 (osd.0) 59 : cluster [DBG] 3.6 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 1048576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:43.766227+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 1048576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:44.766352+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 1040384 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:45.766487+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 1040384 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:46.766633+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.991403580s of 12.012083054s, submitted: 12
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 1007616 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454731 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:47.766820+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:17.129949+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.2 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:17.140513+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.2 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 61)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:17.129949+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.2 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:17.140513+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.2 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 1007616 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:48.767036+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 1007616 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:49.767356+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 999424 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:50.767601+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 999424 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:51.767763+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:21.106688+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.5 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:21.117229+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.5 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 63)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:21.106688+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.5 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:21.117229+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.5 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 983040 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 457142 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:52.768062+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:53.768250+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 958464 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:54.768398+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 950272 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:55.768555+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:25.091932+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:25.102551+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 65)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:25.091932+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:25.102551+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 950272 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:56.768730+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:26.096831+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.6 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:26.107407+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.6 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 67)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:26.096831+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.6 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:26.107407+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.6 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464375 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:57.769117+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:27.053389+0000 osd.0 (osd.0) 68 : cluster [DBG] 3.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:27.063973+0000 osd.0 (osd.0) 69 : cluster [DBG] 3.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 69)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:27.053389+0000 osd.0 (osd.0) 68 : cluster [DBG] 3.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:27.063973+0000 osd.0 (osd.0) 69 : cluster [DBG] 3.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:58.769362+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 933888 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:59.769542+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 933888 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:00.769787+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 933888 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:01.769927+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 925696 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464375 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:02.770118+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 925696 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:03.770300+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.921869278s of 16.941867828s, submitted: 10
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 925696 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:04.770442+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:34.071853+0000 osd.0 (osd.0) 70 : cluster [DBG] 5.7 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:34.082380+0000 osd.0 (osd.0) 71 : cluster [DBG] 5.7 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 917504 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 71)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:34.071853+0000 osd.0 (osd.0) 70 : cluster [DBG] 5.7 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:34.082380+0000 osd.0 (osd.0) 71 : cluster [DBG] 5.7 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:05.770593+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 909312 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:06.770741+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:36.025690+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.2 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:36.036239+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.2 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 892928 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471608 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:07.770914+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 4 last_log 75 sent 73 num 4 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:37.017722+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.9 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:37.028305+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.9 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 73)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:36.025690+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.2 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:36.036239+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.2 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 884736 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:08.771072+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 4 last_log 77 sent 75 num 4 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:38.055242+0000 osd.0 (osd.0) 76 : cluster [DBG] 3.c scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:38.065818+0000 osd.0 (osd.0) 77 : cluster [DBG] 3.c scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 75)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:37.017722+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.9 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:37.028305+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.9 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 77)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:38.055242+0000 osd.0 (osd.0) 76 : cluster [DBG] 3.c scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:38.065818+0000 osd.0 (osd.0) 77 : cluster [DBG] 3.c scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 868352 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:09.772979+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 868352 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:10.773214+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 868352 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:11.773393+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:41.078483+0000 osd.0 (osd.0) 78 : cluster [DBG] 5.4 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:41.089065+0000 osd.0 (osd.0) 79 : cluster [DBG] 5.4 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 79)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:41.078483+0000 osd.0 (osd.0) 78 : cluster [DBG] 5.4 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:41.089065+0000 osd.0 (osd.0) 79 : cluster [DBG] 5.4 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 860160 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476430 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:12.773641+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 851968 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:13.773789+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 843776 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:14.773905+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 843776 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:15.774020+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 843776 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:16.774152+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476430 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:17.774237+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:18.774375+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:19.774517+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:20.774712+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:21.774958+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.228290558s of 17.986967087s, submitted: 10
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 802816 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 478841 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:22.775098+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:52.058771+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.1 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:52.069262+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.1 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 802816 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 81)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:52.058771+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.1 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:52.069262+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.1 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:23.775307+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:53.102610+0000 osd.0 (osd.0) 82 : cluster [DBG] 7.18 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:53.113107+0000 osd.0 (osd.0) 83 : cluster [DBG] 7.18 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 778240 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 83)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:53.102610+0000 osd.0 (osd.0) 82 : cluster [DBG] 7.18 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:53.113107+0000 osd.0 (osd.0) 83 : cluster [DBG] 7.18 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:24.775588+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:54.057259+0000 osd.0 (osd.0) 84 : cluster [DBG] 3.f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:54.067833+0000 osd.0 (osd.0) 85 : cluster [DBG] 3.f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 778240 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 85)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:54.057259+0000 osd.0 (osd.0) 84 : cluster [DBG] 3.f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:54.067833+0000 osd.0 (osd.0) 85 : cluster [DBG] 3.f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:25.775851+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:55.033356+0000 osd.0 (osd.0) 86 : cluster [DBG] 3.1b scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:55.043949+0000 osd.0 (osd.0) 87 : cluster [DBG] 3.1b scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 778240 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 87)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:55.033356+0000 osd.0 (osd.0) 86 : cluster [DBG] 3.1b scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:55.043949+0000 osd.0 (osd.0) 87 : cluster [DBG] 3.1b scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:26.776123+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 770048 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486078 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:27.776303+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 745472 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:28.776463+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:58.034854+0000 osd.0 (osd.0) 88 : cluster [DBG] 7.4 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:58.045567+0000 osd.0 (osd.0) 89 : cluster [DBG] 7.4 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 737280 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 89)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:58.034854+0000 osd.0 (osd.0) 88 : cluster [DBG] 7.4 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:58.045567+0000 osd.0 (osd.0) 89 : cluster [DBG] 7.4 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:29.776713+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 737280 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:30.776884+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 729088 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:31.777097+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.010351181s of 10.056418419s, submitted: 10
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 704512 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490902 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:32.777289+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:02.115277+0000 osd.0 (osd.0) 90 : cluster [DBG] 7.1f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:02.125843+0000 osd.0 (osd.0) 91 : cluster [DBG] 7.1f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 91)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:02.115277+0000 osd.0 (osd.0) 90 : cluster [DBG] 7.1f scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:02.125843+0000 osd.0 (osd.0) 91 : cluster [DBG] 7.1f scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 704512 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:33.777582+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:03.070397+0000 osd.0 (osd.0) 92 : cluster [DBG] 2.18 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:03.080956+0000 osd.0 (osd.0) 93 : cluster [DBG] 2.18 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 93)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:03.070397+0000 osd.0 (osd.0) 92 : cluster [DBG] 2.18 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:03.080956+0000 osd.0 (osd.0) 93 : cluster [DBG] 2.18 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 688128 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:34.777886+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:04.117364+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.19 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:04.127391+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.19 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 95)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:04.117364+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.19 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:04.127391+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.19 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 688128 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:35.778097+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:05.129451+0000 osd.0 (osd.0) 96 : cluster [DBG] 5.1e scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:05.139989+0000 osd.0 (osd.0) 97 : cluster [DBG] 5.1e scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 97)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:05.129451+0000 osd.0 (osd.0) 96 : cluster [DBG] 5.1e scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:05.139989+0000 osd.0 (osd.0) 97 : cluster [DBG] 5.1e scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 688128 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:36.778312+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 679936 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 498141 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:37.778489+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 679936 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:38.778723+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 671744 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:39.778886+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 671744 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:40.779126+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 671744 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:41.779311+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.016089439s of 10.036548615s, submitted: 8
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 655360 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 500552 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:42.779475+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:12.151863+0000 osd.0 (osd.0) 98 : cluster [DBG] 6.0 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:12.176572+0000 osd.0 (osd.0) 99 : cluster [DBG] 6.0 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 99)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:12.151863+0000 osd.0 (osd.0) 98 : cluster [DBG] 6.0 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:12.176572+0000 osd.0 (osd.0) 99 : cluster [DBG] 6.0 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 655360 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:43.779662+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 630784 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:44.779829+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:14.195575+0000 osd.0 (osd.0) 100 : cluster [DBG] 6.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:14.213233+0000 osd.0 (osd.0) 101 : cluster [DBG] 6.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 101)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:14.195575+0000 osd.0 (osd.0) 100 : cluster [DBG] 6.3 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:14.213233+0000 osd.0 (osd.0) 101 : cluster [DBG] 6.3 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 630784 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:45.780416+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 622592 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:46.780555+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 622592 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 502963 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:47.780744+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 614400 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:48.780906+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 606208 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:49.781108+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 606208 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:50.781332+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 606208 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:51.781464+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 598016 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 502963 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:52.781621+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 581632 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:53.781785+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.150177956s of 12.157385826s, submitted: 4
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 557056 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:54.781953+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:24.309241+0000 osd.0 (osd.0) 102 : cluster [DBG] 6.7 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:24.323531+0000 osd.0 (osd.0) 103 : cluster [DBG] 6.7 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 103)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:24.309241+0000 osd.0 (osd.0) 102 : cluster [DBG] 6.7 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:24.323531+0000 osd.0 (osd.0) 103 : cluster [DBG] 6.7 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 548864 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:55.782162+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:25.300302+0000 osd.0 (osd.0) 104 : cluster [DBG] 6.9 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:25.310958+0000 osd.0 (osd.0) 105 : cluster [DBG] 6.9 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 105)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:25.300302+0000 osd.0 (osd.0) 104 : cluster [DBG] 6.9 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:25.310958+0000 osd.0 (osd.0) 105 : cluster [DBG] 6.9 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 540672 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:56.782491+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:26.274103+0000 osd.0 (osd.0) 106 : cluster [DBG] 6.a scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:26.284647+0000 osd.0 (osd.0) 107 : cluster [DBG] 6.a scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 107)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:26.274103+0000 osd.0 (osd.0) 106 : cluster [DBG] 6.a scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:26.284647+0000 osd.0 (osd.0) 107 : cluster [DBG] 6.a scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 540672 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:57.782774+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:27.285783+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.5 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:27.303483+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.5 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 109)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:27.285783+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.5 scrub starts
Dec 01 20:56:05 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:27.303483+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.5 scrub ok
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 532480 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:58.783025+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 524288 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:59.783205+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 524288 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:00.783422+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 516096 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:01.783624+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 507904 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:02.783833+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 499712 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:03.783983+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 491520 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:04.784353+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 491520 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:05.784628+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 483328 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:06.786018+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 483328 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:07.786266+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 475136 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:08.786423+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 475136 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:09.786572+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 475136 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:10.786797+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 466944 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:11.786998+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 466944 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:12.787264+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 466944 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:13.787473+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 458752 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:14.787619+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 458752 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:15.787819+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 450560 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:16.787971+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 450560 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:17.788161+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 442368 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:18.788365+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 434176 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:19.788559+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 434176 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:20.788999+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 425984 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:21.789137+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 425984 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:22.789280+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 417792 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:23.789452+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 401408 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:24.789692+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 401408 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:25.789856+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 393216 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:26.790025+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 393216 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:27.790243+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 393216 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:28.790412+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 376832 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:29.790604+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 376832 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:30.790846+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 368640 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:31.790986+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 368640 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:32.791119+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 368640 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:33.791319+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 352256 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:34.791471+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 352256 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:35.791675+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 344064 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:36.791878+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 344064 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:37.792058+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 352256 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:38.792423+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 344064 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:39.792571+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 344064 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:40.792778+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 335872 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:41.792926+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 335872 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:42.793073+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 327680 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:43.793237+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 327680 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:44.793428+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 327680 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:45.793552+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 319488 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:46.793732+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 319488 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:47.793852+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 311296 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:48.793954+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 311296 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:49.794100+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 311296 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:50.794289+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 303104 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:51.794452+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 303104 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:52.794566+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 294912 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:53.794704+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 294912 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:54.794870+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:55.795007+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 286720 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:56.795167+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 278528 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:57.795303+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 278528 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:58.795435+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 262144 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:59.795558+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 253952 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:00.795723+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 253952 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:01.795835+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 245760 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:02.795958+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 245760 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:03.796053+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 237568 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:04.796216+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 229376 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:05.796352+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 229376 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:06.796491+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 229376 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:07.796619+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 221184 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:08.796740+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 221184 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:09.796861+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 212992 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:10.797030+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 212992 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:11.797159+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 204800 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:12.797324+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 204800 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:13.797458+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 204800 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:14.797632+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 196608 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:15.797888+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 188416 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:16.798049+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 180224 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:17.798262+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 180224 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:18.798422+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 155648 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:19.798635+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 147456 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:20.799407+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 147456 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:21.801529+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 139264 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:22.803175+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 139264 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:23.803927+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 139264 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:24.805125+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 131072 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:25.805305+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 131072 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:26.805450+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 122880 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:27.805866+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 122880 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:28.806437+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 122880 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:29.806599+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 114688 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:30.807448+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 114688 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:31.807607+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 106496 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:32.808409+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 106496 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:33.808853+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 106496 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:34.808989+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 98304 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:35.809364+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 98304 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:36.809612+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 90112 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:37.810231+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 90112 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:38.810408+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 65536 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:39.810626+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 65536 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:40.810834+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 65536 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:41.811008+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 57344 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:42.811212+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 57344 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:43.811465+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 57344 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:44.811667+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 49152 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:45.811804+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 49152 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:46.811956+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 40960 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:47.812091+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 40960 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:48.812362+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 32768 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:49.812578+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 32768 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:50.812740+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 32768 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:51.812945+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 24576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:52.813126+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 24576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:53.813273+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 24576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:54.813430+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 16384 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:55.813605+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 16384 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:56.813755+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 8192 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:57.813906+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 8192 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:58.814036+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1032192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:59.814224+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1032192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:00.814417+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1032192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:01.814619+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1024000 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:02.814737+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1024000 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:03.814890+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1015808 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:04.815079+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1007616 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:05.815252+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1007616 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:06.815427+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 999424 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:07.815578+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 999424 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:08.815739+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 999424 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:09.815902+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 991232 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:10.816104+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 991232 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:11.816257+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 991232 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:12.816406+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 983040 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:13.816584+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 983040 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:14.816720+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 974848 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:15.816830+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 974848 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:16.816990+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 966656 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:17.817137+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 966656 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:18.817274+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 966656 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:19.817428+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 958464 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:20.817645+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 958464 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:21.817794+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 950272 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:22.817929+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 950272 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:23.818101+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 958464 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:24.818253+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 950272 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:25.818390+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 950272 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:26.819069+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 950272 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:27.819223+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 942080 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:28.819352+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 942080 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:29.819481+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 942080 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:30.819658+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 933888 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:31.819841+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 933888 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:32.820119+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 925696 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:33.820253+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 925696 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:34.820378+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 917504 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:35.820572+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 917504 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:36.820764+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 917504 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:37.820940+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 909312 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:38.821125+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 909312 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:39.821243+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 901120 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:40.821402+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 901120 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:41.821527+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 901120 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:42.821670+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 892928 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:43.821825+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 892928 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:44.822007+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 892928 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:45.822128+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 884736 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:46.822265+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 884736 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:47.822403+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 876544 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:48.822744+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 876544 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:49.822875+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 876544 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:50.823053+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 868352 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:51.823231+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 868352 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:52.823375+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 868352 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:53.823525+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 860160 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:54.823657+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 860160 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:55.823812+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 851968 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:56.824017+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 851968 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:57.824171+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 843776 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:58.824415+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 843776 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:59.824582+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 843776 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:00.824815+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 843776 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:01.824954+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 835584 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:02.825137+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 835584 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:03.825287+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 827392 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:04.825471+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 827392 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:05.825625+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 819200 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:06.826053+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 819200 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:07.826360+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 819200 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:08.826553+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 811008 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:09.826748+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 811008 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:10.826942+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 802816 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:11.827129+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 802816 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:12.827266+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 802816 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:13.827393+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 794624 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:14.827594+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 794624 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:15.827788+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 794624 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:16.827908+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 786432 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:17.828077+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 786432 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:18.828244+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 770048 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:19.828430+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 770048 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:20.828605+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 761856 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:21.828734+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 761856 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:22.828887+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 761856 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:23.829053+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 753664 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:24.829246+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 753664 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:25.829367+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 753664 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:26.829527+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 745472 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:27.829675+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 745472 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:28.829850+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 745472 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:29.830009+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 737280 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:30.830177+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 737280 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:31.830787+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 729088 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:32.831069+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 729088 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:33.831268+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 729088 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:34.831395+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 720896 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:35.831585+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 720896 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:36.831913+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 712704 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:37.832079+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 712704 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:38.832636+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 704512 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:39.832750+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 704512 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:40.833028+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 704512 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:41.833262+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 696320 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:42.833439+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 696320 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:43.833681+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 696320 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:44.833957+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 688128 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:45.834124+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 688128 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:46.834353+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 679936 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:47.834517+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 679936 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:48.834659+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 679936 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:49.834802+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 671744 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:50.834988+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 671744 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:51.835256+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 671744 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:52.835385+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 663552 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:53.835588+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 663552 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:54.835721+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 655360 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:55.835850+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 655360 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:56.836094+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 647168 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:57.836249+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 647168 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:58.836388+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 647168 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:59.836557+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 638976 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:00.836747+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 638976 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:01.836915+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 630784 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:02.837056+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 630784 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:03.837168+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 630784 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:04.837298+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 622592 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:05.837417+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 622592 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:06.837615+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 622592 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:07.837826+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 614400 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:08.837969+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 614400 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:09.838085+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 614400 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:10.838517+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 606208 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:11.838614+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 606208 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:12.838769+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 598016 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:13.838935+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 598016 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:14.839080+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 598016 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:15.839242+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 589824 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:16.839394+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 589824 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:17.839533+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 589824 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:18.839671+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 581632 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:19.839826+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 581632 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:20.840001+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 573440 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:21.840330+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 573440 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:22.840452+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 573440 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:23.840759+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 565248 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:24.840966+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 565248 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:25.841137+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 557056 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:26.841273+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 557056 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:27.841480+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 557056 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:28.841662+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 548864 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:29.841831+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 548864 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:30.842001+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 548864 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:31.842123+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:32.842270+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:33.842408+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:34.842720+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:35.843040+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:36.843728+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 524288 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:37.844111+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 524288 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:38.844279+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 516096 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:39.844508+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 516096 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:40.844694+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 516096 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:41.844899+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 507904 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:42.845082+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 507904 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:43.845440+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 499712 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:44.845743+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 499712 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:45.846103+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 499712 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:46.846400+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:47.846671+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:48.846902+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:49.847138+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 483328 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:50.847393+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 483328 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:51.847575+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 475136 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:52.847788+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 475136 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:53.847922+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 475136 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:54.848065+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 466944 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:55.848233+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 466944 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:56.848373+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 458752 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:57.848605+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 499712 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:58.848742+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 499712 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:59.848865+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:00.849045+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:01.849280+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 483328 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:02.849397+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 475136 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:03.849618+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 475136 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:04.849746+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 466944 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:05.849868+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 466944 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:06.850394+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 466944 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:07.850578+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 458752 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:08.850715+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 458752 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:09.850853+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 450560 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:10.851013+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 450560 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:11.851199+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 442368 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:12.851349+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 442368 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:13.851475+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 442368 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:14.851609+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 434176 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:15.851779+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 434176 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:16.852366+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 434176 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:17.852490+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 425984 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:18.852628+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 425984 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:19.852740+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 417792 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:20.852938+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 417792 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:21.853092+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 417792 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:22.853266+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 409600 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:23.853399+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 409600 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:24.853575+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 409600 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:25.853719+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 401408 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:26.853950+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 401408 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:27.854107+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 393216 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:28.854261+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 393216 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:29.854443+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 385024 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:30.854699+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 385024 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:31.854899+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 385024 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:32.855035+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 376832 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:33.855261+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 376832 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:34.855389+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 376832 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:35.855514+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 368640 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:36.855747+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 368640 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:37.856027+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 360448 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:38.856228+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 360448 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:39.856370+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 360448 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:40.856643+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 352256 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:41.856792+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 352256 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:42.856940+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 352256 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:43.858284+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 344064 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:44.858483+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 344064 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:45.859721+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4357 writes, 20K keys, 4357 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4357 writes, 454 syncs, 9.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4357 writes, 20K keys, 4357 commit groups, 1.0 writes per commit group, ingest: 16.42 MB, 0.03 MB/s
                                           Interval WAL: 4357 writes, 454 syncs, 9.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 270336 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:46.859887+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 270336 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:47.860261+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:48.860408+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 270336 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:49.860516+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 262144 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:50.860754+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 262144 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:51.860913+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 262144 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:52.861173+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 253952 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:53.861417+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 253952 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:54.861756+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 245760 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:55.861930+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 245760 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:56.862063+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 237568 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:57.862233+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 229376 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:58.862505+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 229376 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:59.862667+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 221184 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:00.862970+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 221184 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:01.863234+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 221184 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:02.863498+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 212992 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:03.863719+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 212992 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:04.863970+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 212992 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:05.864092+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 204800 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:06.864350+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 204800 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:07.864557+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 204800 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:08.864744+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 196608 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:09.864903+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 196608 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:10.865129+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 188416 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:11.865267+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 188416 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:12.865439+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 188416 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:13.865612+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 180224 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:14.865833+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 180224 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:15.866039+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 172032 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:16.866191+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 172032 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:17.866345+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 163840 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:18.866528+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 163840 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:19.866704+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 163840 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:20.866920+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 155648 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:21.867043+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 155648 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:22.867248+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 155648 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:23.867417+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 147456 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:24.867550+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 147456 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:25.867766+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 139264 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:26.867930+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 139264 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:27.868081+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 139264 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:28.868423+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 131072 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:29.868587+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 131072 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:30.868951+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 122880 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:31.869129+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 122880 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:32.869379+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 122880 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:33.869578+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 114688 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:34.869743+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 114688 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:35.869916+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 114688 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:36.870078+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 106496 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:37.870232+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 106496 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:38.870556+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 106496 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:39.870731+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 98304 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:40.870986+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 98304 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:41.871119+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 90112 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:42.871426+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 90112 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:43.871633+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 90112 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:44.871768+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 81920 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:45.871914+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 81920 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:46.872045+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 73728 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:47.872172+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 73728 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:48.872309+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 73728 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:49.872430+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 65536 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:50.872583+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 65536 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:51.872724+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 57344 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:52.872879+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 57344 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:53.873001+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 57344 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:54.873134+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 49152 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:55.873235+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 49152 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:56.873360+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 49152 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:57.873497+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 40960 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:58.873624+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 40960 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:59.873766+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 40960 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:00.873921+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 32768 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:01.874061+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 32768 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:02.874234+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 24576 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:03.874369+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 24576 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:04.874492+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 16384 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:05.874628+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 16384 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:06.874756+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 16384 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:07.874883+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 8192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:08.875008+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 8192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:09.875137+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 8192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:10.875328+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 0 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:11.875453+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 0 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:12.875573+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1040384 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:13.875719+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1040384 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:14.875875+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1040384 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:15.875954+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1032192 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:16.876086+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1032192 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:17.876206+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1024000 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:18.876319+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1024000 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:19.876431+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1015808 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:20.878343+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1015808 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:21.878452+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1015808 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:22.878616+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1007616 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:23.878772+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1007616 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:24.878907+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 999424 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:25.881451+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 999424 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:26.881569+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 999424 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:27.881693+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 991232 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:28.881838+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 991232 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:29.881972+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 991232 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:30.882130+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 983040 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:31.882248+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 983040 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:32.882389+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 974848 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:33.882528+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 974848 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:34.882662+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 974848 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:35.882798+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 966656 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:36.882939+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 966656 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:37.883060+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 966656 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:38.883192+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 958464 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:39.883314+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 958464 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:40.883468+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 950272 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:41.883575+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 950272 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:42.883709+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 942080 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:43.883904+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 942080 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:44.884099+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 942080 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:45.884278+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:46.885238+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:47.885366+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:48.885771+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:49.885980+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:50.886153+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:51.886270+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:52.886406+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:53.886571+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:54.886781+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:55.886918+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:56.887055+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:57.887219+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:58.887381+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:59.887544+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:00.888088+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:01.888262+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:02.888404+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:03.888563+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:04.888790+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:05.889204+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:06.889362+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:07.889973+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:08.890122+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:09.890582+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:10.890771+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:11.890904+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:12.891156+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:13.891230+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:14.891363+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:15.891509+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:16.891658+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:17.891804+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:18.891988+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:19.892161+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:20.892353+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:21.892520+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:22.892681+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:23.892811+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:24.892952+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:25.893105+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:26.893250+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:27.893359+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:28.893467+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:29.893610+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:30.893754+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:31.893916+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:32.894043+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:33.894245+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:34.894380+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:35.894511+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:36.894715+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:37.894850+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:38.894990+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:39.895127+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:40.895356+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:41.895512+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:42.895686+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:43.895823+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:44.896017+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:45.896156+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:46.896278+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:47.896456+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:48.896578+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:49.896702+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:50.896860+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:51.897042+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:52.897156+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:53.897241+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:54.897390+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:55.897505+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:56.897591+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:57.897730+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:58.897846+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:59.897963+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:00.898096+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:01.898252+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:02.898411+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:03.898538+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:04.898666+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:05.898805+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:06.898949+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:07.899080+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:08.899197+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:09.899337+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:10.899483+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:11.899637+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:12.899767+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:13.899931+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:14.900049+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:15.900202+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:16.900324+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:17.900455+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:18.900598+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:19.900755+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:20.900911+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:21.901062+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:22.901262+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:23.901435+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:24.901613+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:25.901764+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:27.856360+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:28.856522+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:29.856748+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:30.856894+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:31.857059+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:32.857215+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:33.857347+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:34.857467+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:35.857647+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:36.857846+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:37.857958+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:38.858098+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:39.858246+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:40.858368+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:41.858535+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:42.858696+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:43.858801+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:44.858886+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:45.858958+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:46.859113+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:47.859251+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:48.859465+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:49.859597+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:50.859737+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:51.859965+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:52.860293+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:53.860452+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:54.860570+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:55.860705+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:56.860893+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:57.861071+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:58.862036+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:59.862232+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:00.862484+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:01.862693+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:02.862859+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:03.863003+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:04.863165+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:05.863348+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:06.863473+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:07.863604+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:08.863716+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:09.863859+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:10.863982+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:11.864144+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:12.864252+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:13.864607+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:14.864874+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:15.865078+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:16.865262+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:17.865456+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:18.865613+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:19.865746+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:20.865858+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:21.866122+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:22.866270+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:23.866413+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:24.866634+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:25.866784+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:26.866931+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:27.867053+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:28.867206+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:29.867335+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:30.867460+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:31.867982+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:32.868146+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:33.868291+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:34.869143+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:35.869425+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:36.869556+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:37.869669+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:38.869794+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:39.870096+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:40.870405+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:41.870614+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:42.870768+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:43.870920+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:44.871122+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:45.871241+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:46.871392+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:47.871553+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:48.871701+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:49.871868+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:50.872001+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:51.872252+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:52.872416+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: mgrc ms_handle_reset ms_handle_reset con 0x5569201c0000
Dec 01 20:56:05 compute-0 ceph-osd[86634]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2943709997
Dec 01 20:56:05 compute-0 ceph-osd[86634]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2943709997,v1:192.168.122.100:6801/2943709997]
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: get_auth_request con 0x55691f5c6800 auth_method 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: mgrc handle_mgr_configure stats_period=5
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:53.872544+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:54.872781+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:55.872919+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:56.873055+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:57.873248+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 ms_handle_reset con 0x55691f5c7000 session 0x55691fe961c0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f5c6400
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:58.873420+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 647168 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:59.873601+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:00.873740+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:01.873950+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:02.874076+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:03.874217+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:04.874584+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:05.874753+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:06.874875+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:07.874989+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:08.875267+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:09.915212+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:10.915390+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:11.915654+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:12.915831+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:13.915958+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:14.916132+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:15.916323+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:16.916519+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:17.916676+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:18.916898+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:19.917112+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:20.917260+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:21.917422+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:22.917547+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:23.917675+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:24.917845+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:25.918066+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:26.918287+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:27.918438+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:28.918574+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:29.918693+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:30.918828+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:31.919019+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:32.919212+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:33.919356+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:34.919565+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:35.919683+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:36.919817+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:37.920015+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:38.920284+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:39.920448+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:40.920591+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:41.920727+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:42.920867+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:43.921016+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:44.921160+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:45.921322+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:46.921460+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:47.921628+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:48.921782+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:49.921992+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:50.922154+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:51.922369+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:52.922501+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:53.922619+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:54.922824+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:55.923030+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:56.923152+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:57.923238+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:58.923367+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:59.923550+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:00.923718+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:01.923856+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:02.923969+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:03.924077+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:04.924274+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:05.924388+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:06.924574+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:07.924709+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:08.924888+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:09.925145+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:10.925254+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:11.925391+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:12.925520+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:13.925692+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:14.925821+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:15.925968+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:16.926106+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:17.926302+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:18.926500+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:19.926645+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:20.926833+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:21.927755+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:22.927900+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:23.928029+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:24.928149+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:25.928271+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:26.928520+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:27.928678+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:28.928819+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:29.929010+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:30.929162+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:31.929424+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:32.929606+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:33.929794+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:34.929978+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:35.930138+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:36.930330+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:37.930483+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:38.930643+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:39.930777+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:40.930955+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:41.931288+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:42.931517+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:43.931730+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:44.931909+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:45.932059+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:46.932266+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:47.932398+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:48.932571+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:49.932749+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:50.932921+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:51.933128+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:52.933277+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:53.933422+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:54.933552+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:55.933713+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:56.933922+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:57.934123+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:58.934287+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:59.934472+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:00.934667+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:01.934880+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:02.935094+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:03.935294+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:04.935470+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:05.935665+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:06.935812+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:07.935964+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:08.936113+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:09.936498+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:10.936814+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:11.937007+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:12.937939+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:13.938378+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:14.938530+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:15.939325+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:16.939548+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:17.939709+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:18.939836+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:19.940266+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:20.940404+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:21.940543+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:22.940659+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:23.940817+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:24.940983+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:25.941150+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:26.941369+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:27.941578+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:28.941808+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:29.941963+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:30.942174+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:31.942497+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:32.942680+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:33.942827+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:34.943039+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:35.943265+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:36.943488+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:37.943696+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:38.944011+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:39.944300+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:40.944591+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:41.944880+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:42.944986+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:43.945175+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:44.945317+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:45.945462+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:46.945599+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:47.945673+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:48.945823+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:49.945942+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:50.946060+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:51.946229+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:52.946349+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:53.946491+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:54.946639+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:05 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:55.946800+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:56.946972+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:57.956998+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:58.957163+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:05 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:59.957359+0000)
Dec 01 20:56:05 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:05 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:00.957480+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:01.957639+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:02.957794+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:03.957915+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:04.958111+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:05.958299+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:06.958464+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:07.958597+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:08.958728+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:09.958846+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:10.959027+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:11.959244+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:12.959397+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:13.959567+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:14.959724+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:15.959893+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:16.960054+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:17.960385+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:18.960556+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:19.960758+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:20.961071+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:21.961333+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:22.961481+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:23.961611+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:24.961831+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:25.961960+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:26.962135+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:27.962228+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:28.962373+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:29.962560+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:30.962753+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:31.962988+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:32.963235+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:33.963477+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:34.963651+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:35.963788+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:36.963929+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:37.964093+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:38.964261+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:39.964381+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:40.964522+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:41.964695+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:42.964813+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:43.964958+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:44.965262+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:45.965496+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:46.965722+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:47.965893+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:48.966107+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:49.966342+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:50.966692+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:51.966876+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:52.967044+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:53.967167+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:54.967298+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:55.967483+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:56.967717+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:57.967956+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:58.968134+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:59.968447+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:00.968591+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:01.968733+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:02.968861+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:03.969079+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:04.969250+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:05.969368+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:06.969485+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:07.969612+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:08.969733+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:09.969866+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:10.969996+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:11.970160+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:12.970390+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:13.970505+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:14.970633+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:15.970764+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:16.970876+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:17.971306+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:18.971446+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:19.971568+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:20.971705+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:21.971889+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:22.972078+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:23.972367+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread fragmentation_score=0.000122 took=0.000013s
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:24.972482+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:25.972601+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:26.972730+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:27.972881+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:28.973037+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:29.973228+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:30.973412+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:31.973649+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:32.973775+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:33.973883+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:34.974005+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:35.974137+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:36.974267+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:37.974501+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:38.974664+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:39.974903+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:40.975085+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:41.975306+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:42.975432+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:43.975540+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:44.975722+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:45.975880+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4357 writes, 20K keys, 4357 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4357 writes, 454 syncs, 9.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:46.976033+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:47.976261+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:48.976399+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:49.976592+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:50.976759+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:51.976974+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:52.977203+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:53.977511+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:54.977756+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:55.977939+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:56.978108+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:57.978283+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:58.978479+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:59.978653+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:00.978817+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:01.978997+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:02.979115+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 729088 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:03.979224+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 729088 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:04.979428+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 729088 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 971.637207031s of 971.653137207s, submitted: 8
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:05.979594+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 516099 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 548864 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 65 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:06.980107+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 548864 heap: 72720384 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:07.980232+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 9805824 heap: 77381632 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:08.980398+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 9805824 heap: 77381632 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:09.980521+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 67 ms_handle_reset con 0x55691f315400 session 0x55692001b880
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 9789440 heap: 77381632 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:10.980686+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 547035 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 9789440 heap: 77381632 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501000
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:11.980834+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 17915904 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 67 heartbeat osd_stat(store_statfs(0x4fdca1000/0x0/0x4ffc00000, data 0x4bcb05/0x529000, compress 0x0/0x0/0x0, omap 0x903e, meta 0x1a26fc2), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:12.981049+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 ms_handle_reset con 0x556922501000 session 0x55691ffdca80
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:13.981247+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:14.981453+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:15.981686+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:16.982609+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:17.982763+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:18.982893+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:19.983070+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:20.983274+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:21.983494+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:22.983754+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:23.983879+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:24.984021+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:25.984167+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:26.984673+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:27.984816+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:28.985026+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:29.985226+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:30.985382+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:31.985660+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:32.985802+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:33.985973+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:34.986247+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:35.986390+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:36.986543+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:37.986687+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:38.986827+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:39.986955+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:40.987147+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:41.987531+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:42.987701+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:43.987890+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:44.988017+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:45.988230+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feab800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.277194977s of 40.485038757s, submitted: 37
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 17711104 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 ms_handle_reset con 0x55691feab800 session 0x556922036380
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:46.988428+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02e000/0x0/0x4ffc00000, data 0x112e52d/0x119e000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 68 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 17719296 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:47.988662+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 17719296 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feabc00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:48.989329+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 17170432 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:49.989472+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 25313280 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feaa800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:50.989599+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 70 ms_handle_reset con 0x55691feabc00 session 0x5569220eb6c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 70 ms_handle_reset con 0x55691feaa800 session 0x556920825880
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 878771 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 25419776 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:51.989773+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 71 ms_handle_reset con 0x55691f315400 session 0x5569220ea1c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feab800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a32c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 25403392 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:52.989952+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 71 ms_handle_reset con 0x556921a32c00 session 0x55691ffdd6c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 72 ms_handle_reset con 0x55691feab800 session 0x556922069a40
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fc81d000/0x0/0x4ffc00000, data 0x1133d09/0x11aa000, compress 0x0/0x0/0x0, omap 0x8cd0, meta 0x1a27330), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55692243e000
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 72 ms_handle_reset con 0x55692243e000 session 0x55692196fa40
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 25264128 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:53.990148+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a38c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 25133056 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:54.990340+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 73 ms_handle_reset con 0x556921a38c00 session 0x55692196f340
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 25542656 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:55.990819+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 74 ms_handle_reset con 0x55691f315400 session 0x55691fc48000
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a32c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 647567 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 25542656 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 74 heartbeat osd_stat(store_statfs(0x4fd019000/0x0/0x4ffc00000, data 0x113655b/0x11af000, compress 0x0/0x0/0x0, omap 0x883e, meta 0x1a277c2), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:56.991108+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.050306320s of 10.585802078s, submitted: 110
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 75 ms_handle_reset con 0x556921a32c00 session 0x55691ffdc000
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 25542656 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:57.991763+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a38c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 25534464 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:58.992400+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 76 ms_handle_reset con 0x556921a38c00 session 0x55691fea4700
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 25534464 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55692243e000
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:59.992568+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 77 ms_handle_reset con 0x55692243e000 session 0x55691fc49340
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 25354240 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:00.992889+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663303 data_alloc: 218103808 data_used: 0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 25354240 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:01.993042+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fd00d000/0x0/0x4ffc00000, data 0x113a791/0x11ba000, compress 0x0/0x0/0x0, omap 0x8e4d, meta 0x1a271b3), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 25354240 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:02.993238+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 78 ms_handle_reset con 0x556923da7c00 session 0x5569220696c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 25174016 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a32c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:03.993395+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 25157632 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:04.993514+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 79 heartbeat osd_stat(store_statfs(0x4fd00d000/0x0/0x4ffc00000, data 0x113bc5d/0x11bd000, compress 0x0/0x0/0x0, omap 0x8ebe, meta 0x1a27142), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 80 ms_handle_reset con 0x556923da7c00 session 0x556922037dc0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 80 ms_handle_reset con 0x556921a32c00 session 0x55691ff448c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 80 ms_handle_reset con 0x55691f315400 session 0x556922036c40
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 80 ms_handle_reset con 0x556923da7800 session 0x5569220361c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 25100288 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:05.993722+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679404 data_alloc: 218103808 data_used: 51
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 25100288 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 81 heartbeat osd_stat(store_statfs(0x4fcffd000/0x0/0x4ffc00000, data 0x1140052/0x11c9000, compress 0x0/0x0/0x0, omap 0x9003, meta 0x1a26ffd), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:06.993873+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.062306404s of 10.219599724s, submitted: 87
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 82 ms_handle_reset con 0x556923da7400 session 0x55692196fa40
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 24027136 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:07.993994+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 83 ms_handle_reset con 0x556923da7c00 session 0x55692196f880
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 23994368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:08.994214+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 83 ms_handle_reset con 0x556923da7800 session 0x5569220b1880
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 23994368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:09.994474+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 23994368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fcff7000/0x0/0x4ffc00000, data 0x1142cab/0x11d1000, compress 0x0/0x0/0x0, omap 0x8901, meta 0x1a276ff), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:10.994703+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 85 ms_handle_reset con 0x556923da7400 session 0x5569220b1dc0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695985 data_alloc: 218103808 data_used: 51
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 23994368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:11.994896+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 85 heartbeat osd_stat(store_statfs(0x4fcff5000/0x0/0x4ffc00000, data 0x11442a4/0x11d5000, compress 0x0/0x0/0x0, omap 0x8942, meta 0x1a276be), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 23994368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:12.995122+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 86 ms_handle_reset con 0x55691f315400 session 0x55692196fc00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 23961600 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:13.995268+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a32c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 86 ms_handle_reset con 0x556921a32c00 session 0x5569220b16c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 23937024 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:14.995428+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 86 ms_handle_reset con 0x556923da7400 session 0x55692196ee00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 87 ms_handle_reset con 0x55691f315400 session 0x556920824c40
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 23879680 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:15.995564+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fcfef000/0x0/0x4ffc00000, data 0x1146d3d/0x11db000, compress 0x0/0x0/0x0, omap 0x8aa1, meta 0x1a2755f), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 88 ms_handle_reset con 0x556923da7800 session 0x556922068380
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 88 ms_handle_reset con 0x556923da7c00 session 0x55691fc49880
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 706356 data_alloc: 218103808 data_used: 51
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923dc0000
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923dc0400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923dc0800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 23289856 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:16.995698+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.869216919s of 10.003292084s, submitted: 89
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 89 ms_handle_reset con 0x556923dc0000 session 0x55691fc48380
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 23175168 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:17.995845+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 23166976 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:18.995998+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 89 heartbeat osd_stat(store_statfs(0x4fbe45000/0x0/0x4ffc00000, data 0x114b297/0x11e3000, compress 0x0/0x0/0x0, omap 0x801b, meta 0x2bc7fe5), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 89 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 90 ms_handle_reset con 0x55691f315400 session 0x55692208c8c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 22298624 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:19.996160+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 22249472 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:20.996346+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 91 ms_handle_reset con 0x556923da7800 session 0x556920784380
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 713425 data_alloc: 218103808 data_used: 133
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 22257664 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 91 ms_handle_reset con 0x556923da7400 session 0x556920784fc0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:21.996522+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923dc0c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 22102016 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 91 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:22.996659+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 92 ms_handle_reset con 0x556923da7c00 session 0x55691fc49500
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fbe44000/0x0/0x4ffc00000, data 0x114e056/0x11e8000, compress 0x0/0x0/0x0, omap 0x7b07, meta 0x2bc84f9), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 92 ms_handle_reset con 0x556923dc0c00 session 0x55691fc49a40
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 20914176 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:23.996814+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 93 ms_handle_reset con 0x55691f315400 session 0x55692196ec40
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 20889600 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:24.997080+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 93 ms_handle_reset con 0x556923da7400 session 0x55691fc48000
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 20873216 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:25.997279+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 94 ms_handle_reset con 0x556923da7800 session 0x55691fc496c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 721609 data_alloc: 218103808 data_used: 4178
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 20799488 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:26.997427+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.220775604s of 10.000844002s, submitted: 160
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 95 ms_handle_reset con 0x556923da7c00 session 0x5569220eac40
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 20676608 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feab800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:27.997578+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 96 ms_handle_reset con 0x55691feab800 session 0x55692001b880
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 20463616 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:28.997709+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0x1155203/0x11f6000, compress 0x0/0x0/0x0, omap 0x15410, meta 0x2bbabf0), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 20463616 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:29.997850+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 20463616 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:30.997990+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 733078 data_alloc: 218103808 data_used: 4178
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 20463616 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x55691f315400 session 0x556922069a40
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x556923da7800 session 0x5569220b1340
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x556923da7400 session 0x55692208cfc0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x556923da7c00 session 0x5569220b1c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feaa800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x55691feaa800 session 0x55691ff636c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:31.998142+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x55691f315400 session 0x5569220eb500
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x556923da7400 session 0x5569220b1880
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fbe33000/0x0/0x4ffc00000, data 0x11566fb/0x11f9000, compress 0x0/0x0/0x0, omap 0x15697, meta 0x2bba969), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x556923da7800 session 0x55691ff44a80
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 20283392 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:32.998302+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 98 ms_handle_reset con 0x556923da7c00 session 0x55692208dc00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feabc00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 98 ms_handle_reset con 0x55691feabc00 session 0x556920784e00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 98 ms_handle_reset con 0x55691f315400 session 0x55692208c1c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 20373504 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:33.998454+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 20348928 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:34.998620+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 20348928 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:35.998731+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737260 data_alloc: 218103808 data_used: 4690
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 98 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0x1157c0f/0x11fd000, compress 0x0/0x0/0x0, omap 0x15a06, meta 0x2bba5fa), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 20332544 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:36.998852+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 20332544 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:37.999002+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.225811958s of 11.280440331s, submitted: 55
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 99 ms_handle_reset con 0x556922501800 session 0x556922069180
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 20275200 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:38.999132+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 100 ms_handle_reset con 0x556922501400 session 0x556920784700
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501000
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 100 ms_handle_reset con 0x556922501000 session 0x55691ff62540
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923dc3c00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 19996672 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 100 ms_handle_reset con 0x556923dc3c00 session 0x556920824380
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:39.999388+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 101 ms_handle_reset con 0x55691f315400 session 0x55691ff63880
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 19988480 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:40.999550+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fbe24000/0x0/0x4ffc00000, data 0x115bcad/0x1206000, compress 0x0/0x0/0x0, omap 0x16214, meta 0x2bb9dec), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 745564 data_alloc: 218103808 data_used: 5202
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 19988480 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:41.999725+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 19988480 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:42.999849+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 19988480 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:44.000000+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501000
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 102 ms_handle_reset con 0x556922501000 session 0x5569220a01c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 19857408 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 102 heartbeat osd_stat(store_statfs(0x4fbe24000/0x0/0x4ffc00000, data 0x115bcad/0x1206000, compress 0x0/0x0/0x0, omap 0x16214, meta 0x2bb9dec), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:45.000139+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 19865600 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fbe21000/0x0/0x4ffc00000, data 0x115d296/0x1209000, compress 0x0/0x0/0x0, omap 0x16649, meta 0x2bb99b7), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:46.000260+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 103 ms_handle_reset con 0x556922501400 session 0x556922069500
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 751112 data_alloc: 218103808 data_used: 5202
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 19824640 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:47.000390+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 103 ms_handle_reset con 0x556923da7800 session 0x556922036700
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 103 ms_handle_reset con 0x556923da7c00 session 0x5569220b0e00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fbe1e000/0x0/0x4ffc00000, data 0x115e8d3/0x120c000, compress 0x0/0x0/0x0, omap 0x16a68, meta 0x2bb9598), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 19824640 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:48.007385+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feabc00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 19808256 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:49.007522+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.630190849s of 10.746520996s, submitted: 77
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 105 ms_handle_reset con 0x55691feabc00 session 0x5569208248c0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fbe19000/0x0/0x4ffc00000, data 0x115fd9f/0x120f000, compress 0x0/0x0/0x0, omap 0x16d54, meta 0x2bb92ac), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:50.007688+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:51.007932+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 756035 data_alloc: 218103808 data_used: 5202
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:52.008307+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:53.008449+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:54.008601+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 105 ms_handle_reset con 0x556923dc0400 session 0x55692208c000
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 105 ms_handle_reset con 0x556923dc0800 session 0x55691fc2fc00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x5569208a2800
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 106 ms_handle_reset con 0x5569208a2800 session 0x55692211b340
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:55.008793+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0x116286c/0x1213000, compress 0x0/0x0/0x0, omap 0x172b8, meta 0x2bb8d48), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691fee1400
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 106 ms_handle_reset con 0x55691fee1400 session 0x55691fea4700
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:56.008950+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feabc00
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 757552 data_alloc: 218103808 data_used: 5151
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 19931136 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:57.009073+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 107 ms_handle_reset con 0x55691feabc00 session 0x55691fc48c40
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 107 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0x1163e28/0x1213000, compress 0x0/0x0/0x0, omap 0x176ab, meta 0x2bb8955), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 19914752 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:58.009241+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0x1163e28/0x1213000, compress 0x0/0x0/0x0, omap 0x176ab, meta 0x2bb8955), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 19906560 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:59.009393+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 19906560 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:00.009578+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fbe14000/0x0/0x4ffc00000, data 0x11652f4/0x1216000, compress 0x0/0x0/0x0, omap 0x1799e, meta 0x2bb8662), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 19906560 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:01.009771+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763008 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 19906560 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:02.010007+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fbe14000/0x0/0x4ffc00000, data 0x11652f4/0x1216000, compress 0x0/0x0/0x0, omap 0x1799e, meta 0x2bb8662), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 19906560 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:03.010233+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 19898368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:04.010360+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fbe14000/0x0/0x4ffc00000, data 0x11652f4/0x1216000, compress 0x0/0x0/0x0, omap 0x1799e, meta 0x2bb8662), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.974781036s of 15.065981865s, submitted: 76
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:05.010490+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:06.010654+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:07.010737+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:08.010900+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:09.010997+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:10.011142+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:11.011329+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:12.011527+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:13.011708+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:14.011807+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:15.011942+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:16.012088+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:17.012264+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:18.012447+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:19.012622+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:20.012811+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:21.013084+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:22.013397+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:23.013542+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:24.013940+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:25.014073+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:26.014235+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:27.014361+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:28.014502+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:29.014659+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:30.014824+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:31.014995+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:32.015523+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:33.015670+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:34.015864+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:35.016013+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:36.016216+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:37.016429+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:38.016643+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:39.016831+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:40.017162+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:41.017405+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:42.017753+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:43.017946+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:44.018120+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:45.018267+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:46.018469+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:47.018622+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:48.018760+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:49.018946+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:50.019096+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:51.019244+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:52.019427+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:53.019566+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:54.019823+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:55.019956+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:56.020094+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:57.020304+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:58.020508+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:59.020674+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:00.020809+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:01.020982+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:02.021250+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:03.021450+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:04.021609+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:05.021803+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:06.021946+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:07.022079+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:08.022247+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:09.022443+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:10.022688+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:11.022868+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:12.023171+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:13.023399+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:14.023592+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:15.023742+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:16.023908+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:17.024081+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:18.024272+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:19.024409+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:20.024591+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:21.024760+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:22.024988+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:23.025144+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:24.025323+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:25.025508+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:26.025660+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:27.025828+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:28.026001+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:29.026148+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:30.026507+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:31.027129+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:32.027425+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 20:56:06 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 20:56:06 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 19873792 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: do_command 'config diff' '{prefix=config diff}'
Dec 01 20:56:06 compute-0 ceph-osd[86634]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:33.027596+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: do_command 'config show' '{prefix=config show}'
Dec 01 20:56:06 compute-0 ceph-osd[86634]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 01 20:56:06 compute-0 ceph-osd[86634]: do_command 'counter dump' '{prefix=counter dump}'
Dec 01 20:56:06 compute-0 ceph-osd[86634]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 01 20:56:06 compute-0 ceph-osd[86634]: do_command 'counter schema' '{prefix=counter schema}'
Dec 01 20:56:06 compute-0 ceph-osd[86634]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 19333120 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:34.027731+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 19537920 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 20:56:06 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:35.027887+0000)
Dec 01 20:56:06 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 20:56:06 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 19537920 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 20:56:06 compute-0 ceph-osd[86634]: do_command 'log dump' '{prefix=log dump}'
Dec 01 20:56:06 compute-0 ceph-mon[75880]: from='client.14822 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:06 compute-0 ceph-mon[75880]: pgmap v836: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:06 compute-0 ceph-mon[75880]: from='client.14824 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:06 compute-0 ceph-mon[75880]: from='client.14826 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:06 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14834 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 01 20:56:06 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1547768502' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 01 20:56:06 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v837: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Dec 01 20:56:07 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633164877' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 01 20:56:07 compute-0 ceph-mon[75880]: from='client.14828 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:07 compute-0 ceph-mon[75880]: from='client.14830 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:07 compute-0 ceph-mon[75880]: from='client.14834 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:07 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1547768502' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 01 20:56:07 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1633164877' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 01 20:56:07 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14842 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 01 20:56:07 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3383956741' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 01 20:56:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 01 20:56:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013454061' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 01 20:56:08 compute-0 ceph-mon[75880]: from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:08 compute-0 ceph-mon[75880]: pgmap v837: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:08 compute-0 ceph-mon[75880]: from='client.14842 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:08 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3383956741' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 01 20:56:08 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/4013454061' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 01 20:56:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 01 20:56:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 01 20:56:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 01 20:56:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254921829' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 01 20:56:08 compute-0 systemd[1]: Starting Hostname Service...
Dec 01 20:56:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v838: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:09 compute-0 systemd[1]: Started Hostname Service.
Dec 01 20:56:09 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14854 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:09 compute-0 ceph-mon[75880]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 01 20:56:09 compute-0 ceph-mon[75880]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 01 20:56:09 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1254921829' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 01 20:56:09 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 01 20:56:09 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2883135120' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 01 20:56:10 compute-0 ceph-mon[75880]: pgmap v838: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:10 compute-0 ceph-mon[75880]: from='client.14854 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:10 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2883135120' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 01 20:56:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Dec 01 20:56:10 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3730672072' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 01 20:56:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v839: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 01 20:56:10 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466749427' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 01 20:56:11 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3730672072' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 01 20:56:11 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3466749427' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 01 20:56:11 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 01 20:56:11 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584166443' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 01 20:56:12 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14864 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:12 compute-0 ceph-mon[75880]: pgmap v839: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:12 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2584166443' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 01 20:56:12 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 01 20:56:12 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737370466' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 01 20:56:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v840: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 01 20:56:13 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099741284' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 01 20:56:13 compute-0 ceph-mon[75880]: from='client.14864 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:13 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3737370466' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 01 20:56:13 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1099741284' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 01 20:56:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:13 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14870 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:14 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 01 20:56:14 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253598410' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 01 20:56:14 compute-0 ceph-mon[75880]: pgmap v840: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:14 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2253598410' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 01 20:56:14 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14874 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v841: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:14 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14876 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:15 compute-0 ceph-mon[75880]: from='client.14870 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec 01 20:56:15 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/415512472' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Dec 01 20:56:15 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec 01 20:56:15 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154465790' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Dec 01 20:56:16 compute-0 ovs-appctl[255171]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 01 20:56:16 compute-0 ovs-appctl[255181]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 01 20:56:16 compute-0 ovs-appctl[255187]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14882 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:16 compute-0 ceph-mon[75880]: from='client.14874 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:16 compute-0 ceph-mon[75880]: pgmap v841: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:16 compute-0 ceph-mon[75880]: from='client.14876 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:16 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/415512472' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Dec 01 20:56:16 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3154465790' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14884 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:56:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v842: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Dec 01 20:56:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174365097' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Dec 01 20:56:17 compute-0 ceph-mon[75880]: from='client.14882 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:17 compute-0 ceph-mon[75880]: from='client.14884 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:17 compute-0 ceph-mon[75880]: pgmap v842: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:17 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/4174365097' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Dec 01 20:56:17 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Dec 01 20:56:17 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263093299' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Dec 01 20:56:18 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14890 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:18 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1263093299' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Dec 01 20:56:18 compute-0 ceph-mon[75880]: from='client.14890 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:18 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14892 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v843: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 01 20:56:19 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067302018' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 01 20:56:19 compute-0 ceph-mon[75880]: from='client.14892 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 20:56:19 compute-0 ceph-mon[75880]: pgmap v843: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:19 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3067302018' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 01 20:56:19 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Dec 01 20:56:19 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316426621' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Dec 01 20:56:20 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Dec 01 20:56:20 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/354653305' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Dec 01 20:56:20 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/316426621' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Dec 01 20:56:20 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/354653305' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Dec 01 20:56:20 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14900 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v844: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Dec 01 20:56:21 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867249871' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 01 20:56:21 compute-0 ceph-mon[75880]: from='client.14900 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:21 compute-0 ceph-mon[75880]: pgmap v844: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:21 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/867249871' entity='client.admin' cmd={"prefix": "df", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 01 20:56:21 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Dec 01 20:56:21 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2401681325' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Dec 01 20:56:22 compute-0 sudo[256418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:56:22 compute-0 sudo[256418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:56:22 compute-0 sudo[256418]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:22 compute-0 sudo[256443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:56:22 compute-0 sudo[256443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:56:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Dec 01 20:56:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150465733' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Dec 01 20:56:22 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2401681325' entity='client.admin' cmd={"prefix": "df", "format": "json-pretty"} : dispatch
Dec 01 20:56:22 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3150465733' entity='client.admin' cmd={"prefix": "fs dump", "format": "json-pretty"} : dispatch
Dec 01 20:56:22 compute-0 sudo[256443]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:56:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:56:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:56:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:56:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:56:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:56:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Dec 01 20:56:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876446501' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Dec 01 20:56:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:56:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:56:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:56:22 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:56:22 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:56:22 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:56:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v845: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:22 compute-0 sudo[256543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:56:22 compute-0 sudo[256543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:56:22 compute-0 sudo[256543]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:23 compute-0 sudo[256570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:56:23 compute-0 sudo[256570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:56:23 compute-0 podman[256632]: 2025-12-01 20:56:23.285588479 +0000 UTC m=+0.047111375 container create ca36f6a30bb5620c45cbbcbaeba36ff79e88a99ac3dbcbb9cc1603483b8abe6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:56:23 compute-0 systemd[1]: Started libpod-conmon-ca36f6a30bb5620c45cbbcbaeba36ff79e88a99ac3dbcbb9cc1603483b8abe6c.scope.
Dec 01 20:56:23 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:56:23 compute-0 podman[256632]: 2025-12-01 20:56:23.263956402 +0000 UTC m=+0.025479338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:56:23 compute-0 podman[256632]: 2025-12-01 20:56:23.374940074 +0000 UTC m=+0.136462990 container init ca36f6a30bb5620c45cbbcbaeba36ff79e88a99ac3dbcbb9cc1603483b8abe6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:56:23 compute-0 podman[256632]: 2025-12-01 20:56:23.382689198 +0000 UTC m=+0.144212094 container start ca36f6a30bb5620c45cbbcbaeba36ff79e88a99ac3dbcbb9cc1603483b8abe6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:56:23 compute-0 podman[256632]: 2025-12-01 20:56:23.387979783 +0000 UTC m=+0.149502679 container attach ca36f6a30bb5620c45cbbcbaeba36ff79e88a99ac3dbcbb9cc1603483b8abe6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 01 20:56:23 compute-0 bold_dhawan[256648]: 167 167
Dec 01 20:56:23 compute-0 systemd[1]: libpod-ca36f6a30bb5620c45cbbcbaeba36ff79e88a99ac3dbcbb9cc1603483b8abe6c.scope: Deactivated successfully.
Dec 01 20:56:23 compute-0 conmon[256648]: conmon ca36f6a30bb5620c45cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca36f6a30bb5620c45cbbcbaeba36ff79e88a99ac3dbcbb9cc1603483b8abe6c.scope/container/memory.events
Dec 01 20:56:23 compute-0 podman[256632]: 2025-12-01 20:56:23.395031723 +0000 UTC m=+0.156554659 container died ca36f6a30bb5620c45cbbcbaeba36ff79e88a99ac3dbcbb9cc1603483b8abe6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 01 20:56:23 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14910 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f22717f575f312387321156015ffd38f97b7ece049f44bf867b717b1a51b1e20-merged.mount: Deactivated successfully.
Dec 01 20:56:23 compute-0 podman[256632]: 2025-12-01 20:56:23.44892856 +0000 UTC m=+0.210451456 container remove ca36f6a30bb5620c45cbbcbaeba36ff79e88a99ac3dbcbb9cc1603483b8abe6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:56:23 compute-0 systemd[1]: libpod-conmon-ca36f6a30bb5620c45cbbcbaeba36ff79e88a99ac3dbcbb9cc1603483b8abe6c.scope: Deactivated successfully.
Dec 01 20:56:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:56:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:56:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:56:23 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2876446501' entity='client.admin' cmd={"prefix": "fs ls", "format": "json-pretty"} : dispatch
Dec 01 20:56:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:56:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:56:23 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:56:23 compute-0 ceph-mon[75880]: pgmap v845: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:23 compute-0 ceph-mon[75880]: from='client.14910 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:23 compute-0 podman[256701]: 2025-12-01 20:56:23.638461261 +0000 UTC m=+0.049574742 container create 2dd2b04db23ff38ba1e32ae3b079dd7e2d0c68902738050e41b04d372d7ae58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:56:23 compute-0 systemd[1]: Started libpod-conmon-2dd2b04db23ff38ba1e32ae3b079dd7e2d0c68902738050e41b04d372d7ae58c.scope.
Dec 01 20:56:23 compute-0 podman[256701]: 2025-12-01 20:56:23.618686462 +0000 UTC m=+0.029799983 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:56:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Dec 01 20:56:23 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489970148' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Dec 01 20:56:24 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547185e4d106dabd8e0b51fb4e288e4c7ba1b42c7bbe54353675c8254caaa6fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547185e4d106dabd8e0b51fb4e288e4c7ba1b42c7bbe54353675c8254caaa6fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547185e4d106dabd8e0b51fb4e288e4c7ba1b42c7bbe54353675c8254caaa6fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547185e4d106dabd8e0b51fb4e288e4c7ba1b42c7bbe54353675c8254caaa6fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547185e4d106dabd8e0b51fb4e288e4c7ba1b42c7bbe54353675c8254caaa6fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:24 compute-0 podman[256701]: 2025-12-01 20:56:24.556000583 +0000 UTC m=+0.967114084 container init 2dd2b04db23ff38ba1e32ae3b079dd7e2d0c68902738050e41b04d372d7ae58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:56:24 compute-0 podman[256701]: 2025-12-01 20:56:24.565818019 +0000 UTC m=+0.976931500 container start 2dd2b04db23ff38ba1e32ae3b079dd7e2d0c68902738050e41b04d372d7ae58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 01 20:56:24 compute-0 podman[256701]: 2025-12-01 20:56:24.569885577 +0000 UTC m=+0.980999078 container attach 2dd2b04db23ff38ba1e32ae3b079dd7e2d0c68902738050e41b04d372d7ae58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:56:24 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Dec 01 20:56:24 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670376857' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Dec 01 20:56:24 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3489970148' entity='client.admin' cmd={"prefix": "mds stat", "format": "json-pretty"} : dispatch
Dec 01 20:56:24 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/670376857' entity='client.admin' cmd={"prefix": "mon dump", "format": "json-pretty"} : dispatch
Dec 01 20:56:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v846: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:25 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14916 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:25 compute-0 quizzical_noether[256722]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:56:25 compute-0 quizzical_noether[256722]: --> All data devices are unavailable
Dec 01 20:56:25 compute-0 systemd[1]: libpod-2dd2b04db23ff38ba1e32ae3b079dd7e2d0c68902738050e41b04d372d7ae58c.scope: Deactivated successfully.
Dec 01 20:56:25 compute-0 podman[256701]: 2025-12-01 20:56:25.125526793 +0000 UTC m=+1.536640284 container died 2dd2b04db23ff38ba1e32ae3b079dd7e2d0c68902738050e41b04d372d7ae58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noether, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:56:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-547185e4d106dabd8e0b51fb4e288e4c7ba1b42c7bbe54353675c8254caaa6fc-merged.mount: Deactivated successfully.
Dec 01 20:56:25 compute-0 podman[256701]: 2025-12-01 20:56:25.167100494 +0000 UTC m=+1.578213995 container remove 2dd2b04db23ff38ba1e32ae3b079dd7e2d0c68902738050e41b04d372d7ae58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:56:25 compute-0 systemd[1]: libpod-conmon-2dd2b04db23ff38ba1e32ae3b079dd7e2d0c68902738050e41b04d372d7ae58c.scope: Deactivated successfully.
Dec 01 20:56:25 compute-0 sudo[256570]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:25 compute-0 sudo[256845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:56:25 compute-0 sudo[256845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:56:25 compute-0 sudo[256845]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:25 compute-0 sudo[256870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:56:25 compute-0 sudo[256870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:56:25 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Dec 01 20:56:25 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3574886349' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Dec 01 20:56:25 compute-0 podman[256919]: 2025-12-01 20:56:25.711352995 +0000 UTC m=+0.045870857 container create 0f48484f0692e22d5ca9aa93849214e1a9f55a6ce4c9744aedd64f473331b650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:56:25 compute-0 systemd[1]: Started libpod-conmon-0f48484f0692e22d5ca9aa93849214e1a9f55a6ce4c9744aedd64f473331b650.scope.
Dec 01 20:56:25 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:56:25 compute-0 podman[256919]: 2025-12-01 20:56:25.690868994 +0000 UTC m=+0.025386856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:56:25 compute-0 podman[256919]: 2025-12-01 20:56:25.801885288 +0000 UTC m=+0.136403150 container init 0f48484f0692e22d5ca9aa93849214e1a9f55a6ce4c9744aedd64f473331b650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 01 20:56:25 compute-0 podman[256919]: 2025-12-01 20:56:25.812457419 +0000 UTC m=+0.146975271 container start 0f48484f0692e22d5ca9aa93849214e1a9f55a6ce4c9744aedd64f473331b650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:56:25 compute-0 podman[256919]: 2025-12-01 20:56:25.816268537 +0000 UTC m=+0.150786379 container attach 0f48484f0692e22d5ca9aa93849214e1a9f55a6ce4c9744aedd64f473331b650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:56:25 compute-0 stupefied_turing[256952]: 167 167
Dec 01 20:56:25 compute-0 systemd[1]: libpod-0f48484f0692e22d5ca9aa93849214e1a9f55a6ce4c9744aedd64f473331b650.scope: Deactivated successfully.
Dec 01 20:56:25 compute-0 podman[256919]: 2025-12-01 20:56:25.819913012 +0000 UTC m=+0.154430854 container died 0f48484f0692e22d5ca9aa93849214e1a9f55a6ce4c9744aedd64f473331b650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:56:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e4ed6a7d7c626a683a61f5e64828d800fc48561158ccb3b85daf669f87872f7-merged.mount: Deactivated successfully.
Dec 01 20:56:25 compute-0 podman[256919]: 2025-12-01 20:56:25.868617456 +0000 UTC m=+0.203135318 container remove 0f48484f0692e22d5ca9aa93849214e1a9f55a6ce4c9744aedd64f473331b650 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_turing, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:56:25 compute-0 systemd[1]: libpod-conmon-0f48484f0692e22d5ca9aa93849214e1a9f55a6ce4c9744aedd64f473331b650.scope: Deactivated successfully.
Dec 01 20:56:25 compute-0 ceph-mon[75880]: pgmap v846: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:25 compute-0 ceph-mon[75880]: from='client.14916 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:25 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3574886349' entity='client.admin' cmd={"prefix": "osd blocklist ls", "format": "json-pretty"} : dispatch
Dec 01 20:56:25 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14920 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:26 compute-0 podman[256991]: 2025-12-01 20:56:26.026410303 +0000 UTC m=+0.036002457 container create 27b8d55c6338be99c138ac249ba9553d1d445bca31177fd3243bf51e13444c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:56:26 compute-0 systemd[1]: Started libpod-conmon-27b8d55c6338be99c138ac249ba9553d1d445bca31177fd3243bf51e13444c85.scope.
Dec 01 20:56:26 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:56:26 compute-0 podman[256991]: 2025-12-01 20:56:26.010870207 +0000 UTC m=+0.020462381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33081fa385e6e78c93884eab4dd3140612997f710aa97eec9fad42b0b2b17c2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33081fa385e6e78c93884eab4dd3140612997f710aa97eec9fad42b0b2b17c2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33081fa385e6e78c93884eab4dd3140612997f710aa97eec9fad42b0b2b17c2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33081fa385e6e78c93884eab4dd3140612997f710aa97eec9fad42b0b2b17c2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:26 compute-0 podman[256991]: 2025-12-01 20:56:26.167317103 +0000 UTC m=+0.176909277 container init 27b8d55c6338be99c138ac249ba9553d1d445bca31177fd3243bf51e13444c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:56:26 compute-0 podman[256991]: 2025-12-01 20:56:26.174009072 +0000 UTC m=+0.183601226 container start 27b8d55c6338be99c138ac249ba9553d1d445bca31177fd3243bf51e13444c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:56:26 compute-0 podman[256991]: 2025-12-01 20:56:26.178458941 +0000 UTC m=+0.188051115 container attach 27b8d55c6338be99c138ac249ba9553d1d445bca31177fd3243bf51e13444c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:56:26 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14922 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]: {
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:     "0": [
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:         {
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "devices": [
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "/dev/loop3"
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             ],
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_name": "ceph_lv0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_size": "21470642176",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "name": "ceph_lv0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "tags": {
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.cluster_name": "ceph",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.crush_device_class": "",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.encrypted": "0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.objectstore": "bluestore",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.osd_id": "0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.type": "block",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.vdo": "0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.with_tpm": "0"
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             },
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "type": "block",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "vg_name": "ceph_vg0"
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:         }
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:     ],
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:     "1": [
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:         {
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "devices": [
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "/dev/loop4"
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             ],
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_name": "ceph_lv1",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_size": "21470642176",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "name": "ceph_lv1",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "tags": {
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.cluster_name": "ceph",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.crush_device_class": "",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.encrypted": "0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.objectstore": "bluestore",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.osd_id": "1",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.type": "block",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.vdo": "0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.with_tpm": "0"
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             },
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "type": "block",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "vg_name": "ceph_vg1"
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:         }
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:     ],
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:     "2": [
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:         {
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "devices": [
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "/dev/loop5"
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             ],
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_name": "ceph_lv2",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_size": "21470642176",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "name": "ceph_lv2",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "tags": {
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.cluster_name": "ceph",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.crush_device_class": "",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.encrypted": "0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.objectstore": "bluestore",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.osd_id": "2",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.type": "block",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.vdo": "0",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:                 "ceph.with_tpm": "0"
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             },
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "type": "block",
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:             "vg_name": "ceph_vg2"
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:         }
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]:     ]
Dec 01 20:56:26 compute-0 blissful_blackburn[257012]: }
Dec 01 20:56:26 compute-0 systemd[1]: libpod-27b8d55c6338be99c138ac249ba9553d1d445bca31177fd3243bf51e13444c85.scope: Deactivated successfully.
Dec 01 20:56:26 compute-0 podman[256991]: 2025-12-01 20:56:26.496713259 +0000 UTC m=+0.506305413 container died 27b8d55c6338be99c138ac249ba9553d1d445bca31177fd3243bf51e13444c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:56:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-33081fa385e6e78c93884eab4dd3140612997f710aa97eec9fad42b0b2b17c2c-merged.mount: Deactivated successfully.
Dec 01 20:56:26 compute-0 podman[256991]: 2025-12-01 20:56:26.547888211 +0000 UTC m=+0.557480375 container remove 27b8d55c6338be99c138ac249ba9553d1d445bca31177fd3243bf51e13444c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 01 20:56:26 compute-0 systemd[1]: libpod-conmon-27b8d55c6338be99c138ac249ba9553d1d445bca31177fd3243bf51e13444c85.scope: Deactivated successfully.
Dec 01 20:56:26 compute-0 sudo[256870]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:26 compute-0 sudo[257153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:56:26 compute-0 sudo[257153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:56:26 compute-0 sudo[257153]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:26 compute-0 sudo[257240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:56:26 compute-0 sudo[257240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:56:26 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Dec 01 20:56:26 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548000293' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Dec 01 20:56:26 compute-0 ceph-mon[75880]: from='client.14920 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:26 compute-0 ceph-mon[75880]: from='client.14922 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:26 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2548000293' entity='client.admin' cmd={"prefix": "osd dump", "format": "json-pretty"} : dispatch
Dec 01 20:56:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v847: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:27 compute-0 podman[257356]: 2025-12-01 20:56:27.109585247 +0000 UTC m=+0.078032003 container create 289c8bfb10d6c065dab49b505d4538caf3d29942630598131a09ba7b5b946969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Dec 01 20:56:27 compute-0 systemd[1]: Started libpod-conmon-289c8bfb10d6c065dab49b505d4538caf3d29942630598131a09ba7b5b946969.scope.
Dec 01 20:56:27 compute-0 podman[257356]: 2025-12-01 20:56:27.071765814 +0000 UTC m=+0.040212590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:56:27 compute-0 podman[257365]: 2025-12-01 20:56:27.182814078 +0000 UTC m=+0.114235795 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 01 20:56:27 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:56:27 compute-0 podman[257356]: 2025-12-01 20:56:27.234686362 +0000 UTC m=+0.203133138 container init 289c8bfb10d6c065dab49b505d4538caf3d29942630598131a09ba7b5b946969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 20:56:27 compute-0 podman[257356]: 2025-12-01 20:56:27.244157538 +0000 UTC m=+0.212604294 container start 289c8bfb10d6c065dab49b505d4538caf3d29942630598131a09ba7b5b946969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:56:27 compute-0 podman[257356]: 2025-12-01 20:56:27.247772951 +0000 UTC m=+0.216219717 container attach 289c8bfb10d6c065dab49b505d4538caf3d29942630598131a09ba7b5b946969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Dec 01 20:56:27 compute-0 hungry_wing[257415]: 167 167
Dec 01 20:56:27 compute-0 systemd[1]: libpod-289c8bfb10d6c065dab49b505d4538caf3d29942630598131a09ba7b5b946969.scope: Deactivated successfully.
Dec 01 20:56:27 compute-0 podman[257356]: 2025-12-01 20:56:27.253152549 +0000 UTC m=+0.221599335 container died 289c8bfb10d6c065dab49b505d4538caf3d29942630598131a09ba7b5b946969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ce6ad5b455a4ba7ae736dd0cf5cbb31f2d65719504e38f6f630af9c356824f0-merged.mount: Deactivated successfully.
Dec 01 20:56:27 compute-0 podman[257356]: 2025-12-01 20:56:27.308682607 +0000 UTC m=+0.277129373 container remove 289c8bfb10d6c065dab49b505d4538caf3d29942630598131a09ba7b5b946969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_wing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:56:27 compute-0 systemd[1]: libpod-conmon-289c8bfb10d6c065dab49b505d4538caf3d29942630598131a09ba7b5b946969.scope: Deactivated successfully.
Dec 01 20:56:27 compute-0 virtqemud[244294]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 01 20:56:27 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Dec 01 20:56:27 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247463946' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Dec 01 20:56:27 compute-0 podman[257472]: 2025-12-01 20:56:27.525810512 +0000 UTC m=+0.058042717 container create 78741c5472b10a2680c1791c785fd015409f73ee271ac15b8afbf3b654a41d13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_margulis, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:56:27 compute-0 podman[257472]: 2025-12-01 20:56:27.496555276 +0000 UTC m=+0.028787511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:56:27 compute-0 systemd[1]: Started libpod-conmon-78741c5472b10a2680c1791c785fd015409f73ee271ac15b8afbf3b654a41d13.scope.
Dec 01 20:56:27 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc93e9b079f51b744c2692aef16bc4671d69ed80cd28d5a466cca35d1b96af58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc93e9b079f51b744c2692aef16bc4671d69ed80cd28d5a466cca35d1b96af58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc93e9b079f51b744c2692aef16bc4671d69ed80cd28d5a466cca35d1b96af58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc93e9b079f51b744c2692aef16bc4671d69ed80cd28d5a466cca35d1b96af58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:56:27 compute-0 podman[257472]: 2025-12-01 20:56:27.660704823 +0000 UTC m=+0.192937048 container init 78741c5472b10a2680c1791c785fd015409f73ee271ac15b8afbf3b654a41d13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_margulis, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:56:27 compute-0 podman[257472]: 2025-12-01 20:56:27.673768491 +0000 UTC m=+0.206000716 container start 78741c5472b10a2680c1791c785fd015409f73ee271ac15b8afbf3b654a41d13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 20:56:27 compute-0 podman[257472]: 2025-12-01 20:56:27.678015484 +0000 UTC m=+0.210247689 container attach 78741c5472b10a2680c1791c785fd015409f73ee271ac15b8afbf3b654a41d13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_margulis, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Dec 01 20:56:27 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14928 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:27 compute-0 ceph-mon[75880]: pgmap v847: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:27 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2247463946' entity='client.admin' cmd={"prefix": "osd numa-status", "format": "json-pretty"} : dispatch
Dec 01 20:56:28 compute-0 systemd[1]: Starting Time & Date Service...
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14930 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:56:28 compute-0 systemd[1]: Started Time & Date Service.
Dec 01 20:56:28 compute-0 lvm[257696]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:56:28 compute-0 lvm[257698]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:56:28 compute-0 lvm[257696]: VG ceph_vg0 finished
Dec 01 20:56:28 compute-0 lvm[257698]: VG ceph_vg1 finished
Dec 01 20:56:28 compute-0 lvm[257709]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:56:28 compute-0 lvm[257709]: VG ceph_vg2 finished
Dec 01 20:56:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:28 compute-0 cool_margulis[257514]: {}
Dec 01 20:56:28 compute-0 systemd[1]: libpod-78741c5472b10a2680c1791c785fd015409f73ee271ac15b8afbf3b654a41d13.scope: Deactivated successfully.
Dec 01 20:56:28 compute-0 systemd[1]: libpod-78741c5472b10a2680c1791c785fd015409f73ee271ac15b8afbf3b654a41d13.scope: Consumed 1.402s CPU time.
Dec 01 20:56:28 compute-0 podman[257472]: 2025-12-01 20:56:28.560758196 +0000 UTC m=+1.092990391 container died 78741c5472b10a2680c1791c785fd015409f73ee271ac15b8afbf3b654a41d13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:56:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc93e9b079f51b744c2692aef16bc4671d69ed80cd28d5a466cca35d1b96af58-merged.mount: Deactivated successfully.
Dec 01 20:56:28 compute-0 podman[257472]: 2025-12-01 20:56:28.626727951 +0000 UTC m=+1.158960146 container remove 78741c5472b10a2680c1791c785fd015409f73ee271ac15b8afbf3b654a41d13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 01 20:56:28 compute-0 systemd[1]: libpod-conmon-78741c5472b10a2680c1791c785fd015409f73ee271ac15b8afbf3b654a41d13.scope: Deactivated successfully.
Dec 01 20:56:28 compute-0 sudo[257240]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:56:28 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:56:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:56:28 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:56:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Dec 01 20:56:28 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2012513758' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 01 20:56:28 compute-0 sudo[257743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:56:28 compute-0 sudo[257743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:56:28 compute-0 sudo[257743]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v848: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:28 compute-0 ceph-mon[75880]: from='client.14928 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:28 compute-0 ceph-mon[75880]: from='client.14930 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:28 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:56:28 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:56:28 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2012513758' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 01 20:56:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Dec 01 20:56:29 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3944708769' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Dec 01 20:56:29 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14936 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:29 compute-0 ceph-mon[75880]: pgmap v848: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:29 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3944708769' entity='client.admin' cmd={"prefix": "osd stat", "format": "json-pretty"} : dispatch
Dec 01 20:56:30 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.14938 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:30 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 01 20:56:30 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1111221103' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 01 20:56:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v849: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:31 compute-0 podman[257947]: 2025-12-01 20:56:31.121791035 +0000 UTC m=+0.079953063 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:56:31 compute-0 podman[257948]: 2025-12-01 20:56:31.188882134 +0000 UTC m=+0.140795496 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 01 20:56:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Dec 01 20:56:31 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302738879' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Dec 01 20:56:31 compute-0 ceph-mon[75880]: from='client.14936 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:31 compute-0 ceph-mon[75880]: from='client.14938 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 20:56:31 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1111221103' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 01 20:56:32 compute-0 ceph-mon[75880]: pgmap v849: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:32 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/302738879' entity='client.admin' cmd={"prefix": "time-sync-status", "format": "json-pretty"} : dispatch
Dec 01 20:56:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:56:32
Dec 01 20:56:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:56:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:56:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['vms', 'backups', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes']
Dec 01 20:56:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:56:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v850: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:56:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:56:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:34 compute-0 ceph-mon[75880]: pgmap v850: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v851: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v852: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:37 compute-0 ceph-mon[75880]: pgmap v851: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:38 compute-0 ceph-mon[75880]: pgmap v852: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v853: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:40 compute-0 ceph-mon[75880]: pgmap v853: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:56:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v854: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:42 compute-0 ceph-mon[75880]: pgmap v854: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v855: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:44 compute-0 ceph-mon[75880]: pgmap v855: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:56:44.360 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:56:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:56:44.360 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:56:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:56:44.360 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:56:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v856: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:46 compute-0 ceph-mon[75880]: pgmap v856: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v857: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:48 compute-0 ceph-mon[75880]: pgmap v857: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v858: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:50 compute-0 ceph-mon[75880]: pgmap v858: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v859: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:52 compute-0 ceph-mon[75880]: pgmap v859: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v860: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:53 compute-0 ceph-mon[75880]: pgmap v860: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v861: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:55 compute-0 sudo[250886]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:55 compute-0 sshd-session[250885]: Received disconnect from 192.168.122.10 port 50098:11: disconnected by user
Dec 01 20:56:55 compute-0 sshd-session[250885]: Disconnected from user zuul 192.168.122.10 port 50098
Dec 01 20:56:55 compute-0 sshd-session[250882]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:56:55 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec 01 20:56:55 compute-0 systemd[1]: session-53.scope: Consumed 2min 50.196s CPU time, 625.5M memory peak, read 201.3M from disk, written 60.8M to disk.
Dec 01 20:56:55 compute-0 systemd-logind[796]: Session 53 logged out. Waiting for processes to exit.
Dec 01 20:56:55 compute-0 systemd-logind[796]: Removed session 53.
Dec 01 20:56:55 compute-0 sshd-session[257996]: Accepted publickey for zuul from 192.168.122.10 port 46960 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:56:55 compute-0 systemd-logind[796]: New session 54 of user zuul.
Dec 01 20:56:55 compute-0 systemd[1]: Started Session 54 of User zuul.
Dec 01 20:56:55 compute-0 sshd-session[257996]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:56:55 compute-0 sudo[258000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-12-01-xpmydtu.tar.xz
Dec 01 20:56:55 compute-0 sudo[258000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:56:55 compute-0 sudo[258000]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:55 compute-0 sshd-session[257999]: Received disconnect from 192.168.122.10 port 46960:11: disconnected by user
Dec 01 20:56:55 compute-0 sshd-session[257999]: Disconnected from user zuul 192.168.122.10 port 46960
Dec 01 20:56:55 compute-0 sshd-session[257996]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:56:55 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Dec 01 20:56:55 compute-0 systemd-logind[796]: Session 54 logged out. Waiting for processes to exit.
Dec 01 20:56:55 compute-0 systemd-logind[796]: Removed session 54.
Dec 01 20:56:55 compute-0 sshd-session[258025]: Accepted publickey for zuul from 192.168.122.10 port 46976 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 20:56:55 compute-0 systemd-logind[796]: New session 55 of user zuul.
Dec 01 20:56:55 compute-0 systemd[1]: Started Session 55 of User zuul.
Dec 01 20:56:55 compute-0 sshd-session[258025]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 20:56:55 compute-0 sudo[258029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Dec 01 20:56:55 compute-0 sudo[258029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 20:56:55 compute-0 sudo[258029]: pam_unix(sudo:session): session closed for user root
Dec 01 20:56:55 compute-0 sshd-session[258028]: Received disconnect from 192.168.122.10 port 46976:11: disconnected by user
Dec 01 20:56:55 compute-0 sshd-session[258028]: Disconnected from user zuul 192.168.122.10 port 46976
Dec 01 20:56:55 compute-0 sshd-session[258025]: pam_unix(sshd:session): session closed for user zuul
Dec 01 20:56:55 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Dec 01 20:56:55 compute-0 systemd-logind[796]: Session 55 logged out. Waiting for processes to exit.
Dec 01 20:56:55 compute-0 systemd-logind[796]: Removed session 55.
Dec 01 20:56:56 compute-0 ceph-mon[75880]: pgmap v861: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v862: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:57 compute-0 nova_compute[244568]: 2025-12-01 20:56:57.492 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:56:57 compute-0 nova_compute[244568]: 2025-12-01 20:56:57.494 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:56:58 compute-0 podman[258054]: 2025-12-01 20:56:58.121801273 +0000 UTC m=+0.075478272 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 01 20:56:58 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 01 20:56:58 compute-0 ceph-mon[75880]: pgmap v862: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:58 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 01 20:56:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:56:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v863: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:56:59 compute-0 nova_compute[244568]: 2025-12-01 20:56:59.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:56:59 compute-0 nova_compute[244568]: 2025-12-01 20:56:59.958 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:57:00 compute-0 ceph-mon[75880]: pgmap v863: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v864: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:00 compute-0 nova_compute[244568]: 2025-12-01 20:57:00.953 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:57:00 compute-0 nova_compute[244568]: 2025-12-01 20:57:00.976 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:57:00 compute-0 nova_compute[244568]: 2025-12-01 20:57:00.977 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:57:00 compute-0 nova_compute[244568]: 2025-12-01 20:57:00.977 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 20:57:01 compute-0 nova_compute[244568]: 2025-12-01 20:57:01.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:57:01 compute-0 nova_compute[244568]: 2025-12-01 20:57:01.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 20:57:01 compute-0 nova_compute[244568]: 2025-12-01 20:57:01.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 20:57:01 compute-0 nova_compute[244568]: 2025-12-01 20:57:01.979 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 20:57:01 compute-0 nova_compute[244568]: 2025-12-01 20:57:01.980 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:57:02 compute-0 podman[258078]: 2025-12-01 20:57:02.082944453 +0000 UTC m=+0.047338582 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 01 20:57:02 compute-0 podman[258079]: 2025-12-01 20:57:02.117078872 +0000 UTC m=+0.078767826 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:57:02 compute-0 ceph-mon[75880]: pgmap v864: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:57:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1389112759' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:57:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:57:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1389112759' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:57:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v865: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:02 compute-0 nova_compute[244568]: 2025-12-01 20:57:02.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:57:02 compute-0 nova_compute[244568]: 2025-12-01 20:57:02.991 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:57:02 compute-0 nova_compute[244568]: 2025-12-01 20:57:02.991 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:57:02 compute-0 nova_compute[244568]: 2025-12-01 20:57:02.992 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:57:02 compute-0 nova_compute[244568]: 2025-12-01 20:57:02.992 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 20:57:02 compute-0 nova_compute[244568]: 2025-12-01 20:57:02.992 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:57:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:57:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:57:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:57:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:57:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:57:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:57:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/1389112759' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:57:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/1389112759' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:57:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:57:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3314394182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:57:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.513630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622623513722, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1991, "num_deletes": 254, "total_data_size": 2055837, "memory_usage": 2090472, "flush_reason": "Manual Compaction"}
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622623525382, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1317937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15904, "largest_seqno": 17893, "table_properties": {"data_size": 1310925, "index_size": 3638, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19319, "raw_average_key_size": 21, "raw_value_size": 1294843, "raw_average_value_size": 1426, "num_data_blocks": 164, "num_entries": 908, "num_filter_entries": 908, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764622449, "oldest_key_time": 1764622449, "file_creation_time": 1764622623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 11798 microseconds, and 5548 cpu microseconds.
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.525446) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1317937 bytes OK
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.525472) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.526914) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.526938) EVENT_LOG_v1 {"time_micros": 1764622623526930, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.526961) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2047070, prev total WAL file size 2047070, number of live WAL files 2.
Dec 01 20:57:03 compute-0 nova_compute[244568]: 2025-12-01 20:57:03.526 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.529027) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1287KB)], [38(5821KB)]
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622623529064, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 7278996, "oldest_snapshot_seqno": -1}
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4080 keys, 5840152 bytes, temperature: kUnknown
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622623564926, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 5840152, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5811500, "index_size": 17342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 96159, "raw_average_key_size": 23, "raw_value_size": 5736961, "raw_average_value_size": 1406, "num_data_blocks": 746, "num_entries": 4080, "num_filter_entries": 4080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 1764622623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.565118) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 5840152 bytes
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.566441) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.6 rd, 162.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 5.7 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(10.0) write-amplify(4.4) OK, records in: 4523, records dropped: 443 output_compression: NoCompression
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.566456) EVENT_LOG_v1 {"time_micros": 1764622623566449, "job": 18, "event": "compaction_finished", "compaction_time_micros": 35923, "compaction_time_cpu_micros": 14345, "output_level": 6, "num_output_files": 1, "total_output_size": 5840152, "num_input_records": 4523, "num_output_records": 4080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622623566742, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622623567839, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.528927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.567918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.567922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.567924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.567926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:03 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:03.567928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:03 compute-0 nova_compute[244568]: 2025-12-01 20:57:03.744 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:57:03 compute-0 nova_compute[244568]: 2025-12-01 20:57:03.746 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5108MB free_disk=59.988265527412295GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 20:57:03 compute-0 nova_compute[244568]: 2025-12-01 20:57:03.746 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:57:03 compute-0 nova_compute[244568]: 2025-12-01 20:57:03.747 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:57:03 compute-0 nova_compute[244568]: 2025-12-01 20:57:03.832 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 20:57:03 compute-0 nova_compute[244568]: 2025-12-01 20:57:03.833 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 20:57:03 compute-0 nova_compute[244568]: 2025-12-01 20:57:03.863 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:57:04 compute-0 ceph-mon[75880]: pgmap v865: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:04 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3314394182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:57:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:57:04 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82949048' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:57:04 compute-0 nova_compute[244568]: 2025-12-01 20:57:04.414 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:57:04 compute-0 nova_compute[244568]: 2025-12-01 20:57:04.419 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:57:04 compute-0 nova_compute[244568]: 2025-12-01 20:57:04.437 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:57:04 compute-0 nova_compute[244568]: 2025-12-01 20:57:04.439 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 20:57:04 compute-0 nova_compute[244568]: 2025-12-01 20:57:04.439 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:57:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v866: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:05 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/82949048' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:57:06 compute-0 ceph-mon[75880]: pgmap v866: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v867: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:08 compute-0 ceph-mon[75880]: pgmap v867: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v868: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:10 compute-0 ceph-mon[75880]: pgmap v868: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v869: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:12 compute-0 ceph-mon[75880]: pgmap v869: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v870: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:14 compute-0 ceph-mon[75880]: pgmap v870: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v871: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:16 compute-0 ceph-mon[75880]: pgmap v871: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v872: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:18 compute-0 ceph-mon[75880]: pgmap v872: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v873: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:20 compute-0 ceph-mon[75880]: pgmap v873: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v874: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:20 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 20:57:22 compute-0 ceph-mon[75880]: pgmap v874: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v875: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:24 compute-0 ceph-mon[75880]: pgmap v875: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v876: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:25 compute-0 ceph-mon[75880]: pgmap v876: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v877: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:27 compute-0 ceph-mon[75880]: pgmap v877: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:28 compute-0 sudo[258168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:57:28 compute-0 sudo[258168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:57:28 compute-0 sudo[258168]: pam_unix(sudo:session): session closed for user root
Dec 01 20:57:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v878: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:28 compute-0 sudo[258199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:57:28 compute-0 sudo[258199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:57:28 compute-0 podman[258192]: 2025-12-01 20:57:28.974479747 +0000 UTC m=+0.075932476 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec 01 20:57:29 compute-0 sudo[258199]: pam_unix(sudo:session): session closed for user root
Dec 01 20:57:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:57:29 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:57:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:57:29 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:57:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:57:29 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:57:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:57:29 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:57:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:57:29 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:57:29 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:57:29 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:57:29 compute-0 sudo[258270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:57:29 compute-0 sudo[258270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:57:29 compute-0 sudo[258270]: pam_unix(sudo:session): session closed for user root
Dec 01 20:57:29 compute-0 sudo[258295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:57:29 compute-0 sudo[258295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:57:30 compute-0 ceph-mon[75880]: pgmap v878: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:30 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:57:30 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:57:30 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:57:30 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:57:30 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:57:30 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:57:30 compute-0 podman[258333]: 2025-12-01 20:57:30.001448974 +0000 UTC m=+0.052778203 container create 2bfdd448099b199ab53347b95567cf7e1dd6e95f12411c03f51af33d6fc33763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.012159) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622650012271, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 466, "num_deletes": 251, "total_data_size": 260126, "memory_usage": 268552, "flush_reason": "Manual Compaction"}
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622650017810, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 256432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17894, "largest_seqno": 18359, "table_properties": {"data_size": 253778, "index_size": 690, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6440, "raw_average_key_size": 18, "raw_value_size": 248473, "raw_average_value_size": 728, "num_data_blocks": 32, "num_entries": 341, "num_filter_entries": 341, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764622624, "oldest_key_time": 1764622624, "file_creation_time": 1764622650, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5693 microseconds, and 2482 cpu microseconds.
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.017853) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 256432 bytes OK
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.017874) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.018942) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.018956) EVENT_LOG_v1 {"time_micros": 1764622650018951, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.018977) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 257333, prev total WAL file size 257333, number of live WAL files 2.
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.019672) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(250KB)], [41(5703KB)]
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622650019765, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 6096584, "oldest_snapshot_seqno": -1}
Dec 01 20:57:30 compute-0 systemd[1]: Started libpod-conmon-2bfdd448099b199ab53347b95567cf7e1dd6e95f12411c03f51af33d6fc33763.scope.
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 3910 keys, 4902379 bytes, temperature: kUnknown
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622650053177, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 4902379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4876238, "index_size": 15280, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 93304, "raw_average_key_size": 23, "raw_value_size": 4805960, "raw_average_value_size": 1229, "num_data_blocks": 650, "num_entries": 3910, "num_filter_entries": 3910, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 1764622650, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.053913) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 4902379 bytes
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.056009) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.0 rd, 144.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 5.6 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(42.9) write-amplify(19.1) OK, records in: 4421, records dropped: 511 output_compression: NoCompression
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.056047) EVENT_LOG_v1 {"time_micros": 1764622650056031, "job": 20, "event": "compaction_finished", "compaction_time_micros": 33869, "compaction_time_cpu_micros": 14688, "output_level": 6, "num_output_files": 1, "total_output_size": 4902379, "num_input_records": 4421, "num_output_records": 3910, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622650056482, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622650057456, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.019313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.057550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.057558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.057560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.057562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:30 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:57:30.057563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:57:30 compute-0 podman[258333]: 2025-12-01 20:57:29.977289627 +0000 UTC m=+0.028618946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:57:30 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:57:30 compute-0 podman[258333]: 2025-12-01 20:57:30.105054066 +0000 UTC m=+0.156383315 container init 2bfdd448099b199ab53347b95567cf7e1dd6e95f12411c03f51af33d6fc33763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_visvesvaraya, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:57:30 compute-0 podman[258333]: 2025-12-01 20:57:30.115352778 +0000 UTC m=+0.166682007 container start 2bfdd448099b199ab53347b95567cf7e1dd6e95f12411c03f51af33d6fc33763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:57:30 compute-0 podman[258333]: 2025-12-01 20:57:30.119930041 +0000 UTC m=+0.171259320 container attach 2bfdd448099b199ab53347b95567cf7e1dd6e95f12411c03f51af33d6fc33763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_visvesvaraya, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:57:30 compute-0 happy_visvesvaraya[258349]: 167 167
Dec 01 20:57:30 compute-0 systemd[1]: libpod-2bfdd448099b199ab53347b95567cf7e1dd6e95f12411c03f51af33d6fc33763.scope: Deactivated successfully.
Dec 01 20:57:30 compute-0 podman[258333]: 2025-12-01 20:57:30.124069071 +0000 UTC m=+0.175398310 container died 2bfdd448099b199ab53347b95567cf7e1dd6e95f12411c03f51af33d6fc33763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_visvesvaraya, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 01 20:57:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-faf4d9d3302d3a14dbc7214e8530bc78d136ccf71176572f4a5be8f57ffdfabb-merged.mount: Deactivated successfully.
Dec 01 20:57:30 compute-0 podman[258333]: 2025-12-01 20:57:30.168653565 +0000 UTC m=+0.219982794 container remove 2bfdd448099b199ab53347b95567cf7e1dd6e95f12411c03f51af33d6fc33763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_visvesvaraya, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:57:30 compute-0 systemd[1]: libpod-conmon-2bfdd448099b199ab53347b95567cf7e1dd6e95f12411c03f51af33d6fc33763.scope: Deactivated successfully.
Dec 01 20:57:30 compute-0 podman[258374]: 2025-12-01 20:57:30.400045425 +0000 UTC m=+0.069013650 container create 93cfa74408670ee4dc62973a5b0c7126bbb846e3d6a3bbf3cfe36dbde108df33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:57:30 compute-0 systemd[1]: Started libpod-conmon-93cfa74408670ee4dc62973a5b0c7126bbb846e3d6a3bbf3cfe36dbde108df33.scope.
Dec 01 20:57:30 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:57:30 compute-0 podman[258374]: 2025-12-01 20:57:30.379544055 +0000 UTC m=+0.048512300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9302c37d2fd61f5ec11b63744b0e5c74c322333d27b8e650507ee6567d94a5a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9302c37d2fd61f5ec11b63744b0e5c74c322333d27b8e650507ee6567d94a5a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9302c37d2fd61f5ec11b63744b0e5c74c322333d27b8e650507ee6567d94a5a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9302c37d2fd61f5ec11b63744b0e5c74c322333d27b8e650507ee6567d94a5a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9302c37d2fd61f5ec11b63744b0e5c74c322333d27b8e650507ee6567d94a5a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:30 compute-0 podman[258374]: 2025-12-01 20:57:30.492669634 +0000 UTC m=+0.161637869 container init 93cfa74408670ee4dc62973a5b0c7126bbb846e3d6a3bbf3cfe36dbde108df33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:57:30 compute-0 podman[258374]: 2025-12-01 20:57:30.502951845 +0000 UTC m=+0.171920060 container start 93cfa74408670ee4dc62973a5b0c7126bbb846e3d6a3bbf3cfe36dbde108df33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 01 20:57:30 compute-0 podman[258374]: 2025-12-01 20:57:30.507652563 +0000 UTC m=+0.176620778 container attach 93cfa74408670ee4dc62973a5b0c7126bbb846e3d6a3bbf3cfe36dbde108df33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Dec 01 20:57:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v879: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:31 compute-0 zen_noether[258390]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:57:31 compute-0 zen_noether[258390]: --> All data devices are unavailable
Dec 01 20:57:31 compute-0 systemd[1]: libpod-93cfa74408670ee4dc62973a5b0c7126bbb846e3d6a3bbf3cfe36dbde108df33.scope: Deactivated successfully.
Dec 01 20:57:31 compute-0 conmon[258390]: conmon 93cfa74408670ee4dc62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93cfa74408670ee4dc62973a5b0c7126bbb846e3d6a3bbf3cfe36dbde108df33.scope/container/memory.events
Dec 01 20:57:31 compute-0 podman[258410]: 2025-12-01 20:57:31.17007609 +0000 UTC m=+0.021524424 container died 93cfa74408670ee4dc62973a5b0c7126bbb846e3d6a3bbf3cfe36dbde108df33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:57:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9302c37d2fd61f5ec11b63744b0e5c74c322333d27b8e650507ee6567d94a5a6-merged.mount: Deactivated successfully.
Dec 01 20:57:31 compute-0 podman[258410]: 2025-12-01 20:57:31.216754752 +0000 UTC m=+0.068203066 container remove 93cfa74408670ee4dc62973a5b0c7126bbb846e3d6a3bbf3cfe36dbde108df33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 01 20:57:31 compute-0 systemd[1]: libpod-conmon-93cfa74408670ee4dc62973a5b0c7126bbb846e3d6a3bbf3cfe36dbde108df33.scope: Deactivated successfully.
Dec 01 20:57:31 compute-0 sudo[258295]: pam_unix(sudo:session): session closed for user root
Dec 01 20:57:31 compute-0 sudo[258425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:57:31 compute-0 sudo[258425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:57:31 compute-0 sudo[258425]: pam_unix(sudo:session): session closed for user root
Dec 01 20:57:31 compute-0 sudo[258450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:57:31 compute-0 sudo[258450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:57:31 compute-0 podman[258487]: 2025-12-01 20:57:31.733527902 +0000 UTC m=+0.053855546 container create f15557b266f6ba8aea97d8117fa8321e125ba22f11304b49444177ae288d634c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:57:31 compute-0 systemd[1]: Started libpod-conmon-f15557b266f6ba8aea97d8117fa8321e125ba22f11304b49444177ae288d634c.scope.
Dec 01 20:57:31 compute-0 podman[258487]: 2025-12-01 20:57:31.707099835 +0000 UTC m=+0.027427529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:57:31 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:57:31 compute-0 podman[258487]: 2025-12-01 20:57:31.828561246 +0000 UTC m=+0.148888870 container init f15557b266f6ba8aea97d8117fa8321e125ba22f11304b49444177ae288d634c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:57:31 compute-0 podman[258487]: 2025-12-01 20:57:31.833612364 +0000 UTC m=+0.153939968 container start f15557b266f6ba8aea97d8117fa8321e125ba22f11304b49444177ae288d634c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 01 20:57:31 compute-0 podman[258487]: 2025-12-01 20:57:31.837297569 +0000 UTC m=+0.157625183 container attach f15557b266f6ba8aea97d8117fa8321e125ba22f11304b49444177ae288d634c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 01 20:57:31 compute-0 inspiring_moser[258504]: 167 167
Dec 01 20:57:31 compute-0 systemd[1]: libpod-f15557b266f6ba8aea97d8117fa8321e125ba22f11304b49444177ae288d634c.scope: Deactivated successfully.
Dec 01 20:57:31 compute-0 podman[258487]: 2025-12-01 20:57:31.839717685 +0000 UTC m=+0.160045299 container died f15557b266f6ba8aea97d8117fa8321e125ba22f11304b49444177ae288d634c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 01 20:57:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-cec4904bae1ebd1de7cf2249025f1fa412012a95bb36b22218062c04fbf950de-merged.mount: Deactivated successfully.
Dec 01 20:57:31 compute-0 podman[258487]: 2025-12-01 20:57:31.87791511 +0000 UTC m=+0.198242724 container remove f15557b266f6ba8aea97d8117fa8321e125ba22f11304b49444177ae288d634c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 01 20:57:31 compute-0 systemd[1]: libpod-conmon-f15557b266f6ba8aea97d8117fa8321e125ba22f11304b49444177ae288d634c.scope: Deactivated successfully.
Dec 01 20:57:32 compute-0 ceph-mon[75880]: pgmap v879: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:32 compute-0 podman[258528]: 2025-12-01 20:57:32.087389976 +0000 UTC m=+0.069396314 container create 1260f3a0bd3e46bb417a41fc1946b5f5b9ba799e41cde6152adcfac0147bff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:57:32 compute-0 systemd[1]: Started libpod-conmon-1260f3a0bd3e46bb417a41fc1946b5f5b9ba799e41cde6152adcfac0147bff61.scope.
Dec 01 20:57:32 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2941db19f1b23290fa67ba677efb0d251e3987cf6f5a9ce6e1916ce4854c6b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2941db19f1b23290fa67ba677efb0d251e3987cf6f5a9ce6e1916ce4854c6b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2941db19f1b23290fa67ba677efb0d251e3987cf6f5a9ce6e1916ce4854c6b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2941db19f1b23290fa67ba677efb0d251e3987cf6f5a9ce6e1916ce4854c6b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:32 compute-0 podman[258528]: 2025-12-01 20:57:32.062853587 +0000 UTC m=+0.044860015 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:57:32 compute-0 podman[258528]: 2025-12-01 20:57:32.164103266 +0000 UTC m=+0.146109634 container init 1260f3a0bd3e46bb417a41fc1946b5f5b9ba799e41cde6152adcfac0147bff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 01 20:57:32 compute-0 podman[258528]: 2025-12-01 20:57:32.173646595 +0000 UTC m=+0.155652943 container start 1260f3a0bd3e46bb417a41fc1946b5f5b9ba799e41cde6152adcfac0147bff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 01 20:57:32 compute-0 podman[258528]: 2025-12-01 20:57:32.177304878 +0000 UTC m=+0.159311216 container attach 1260f3a0bd3e46bb417a41fc1946b5f5b9ba799e41cde6152adcfac0147bff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:57:32 compute-0 podman[258545]: 2025-12-01 20:57:32.2109169 +0000 UTC m=+0.067335648 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 01 20:57:32 compute-0 podman[258547]: 2025-12-01 20:57:32.240777035 +0000 UTC m=+0.095834240 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]: {
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:     "0": [
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:         {
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "devices": [
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "/dev/loop3"
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             ],
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_name": "ceph_lv0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_size": "21470642176",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "name": "ceph_lv0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "tags": {
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.cluster_name": "ceph",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.crush_device_class": "",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.encrypted": "0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.objectstore": "bluestore",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.osd_id": "0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.type": "block",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.vdo": "0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.with_tpm": "0"
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             },
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "type": "block",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "vg_name": "ceph_vg0"
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:         }
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:     ],
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:     "1": [
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:         {
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "devices": [
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "/dev/loop4"
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             ],
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_name": "ceph_lv1",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_size": "21470642176",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "name": "ceph_lv1",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "tags": {
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.cluster_name": "ceph",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.crush_device_class": "",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.encrypted": "0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.objectstore": "bluestore",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.osd_id": "1",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.type": "block",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.vdo": "0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.with_tpm": "0"
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             },
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "type": "block",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "vg_name": "ceph_vg1"
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:         }
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:     ],
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:     "2": [
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:         {
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "devices": [
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "/dev/loop5"
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             ],
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_name": "ceph_lv2",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_size": "21470642176",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "name": "ceph_lv2",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "tags": {
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.cluster_name": "ceph",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.crush_device_class": "",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.encrypted": "0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.objectstore": "bluestore",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.osd_id": "2",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.type": "block",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.vdo": "0",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:                 "ceph.with_tpm": "0"
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             },
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "type": "block",
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:             "vg_name": "ceph_vg2"
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:         }
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]:     ]
Dec 01 20:57:32 compute-0 stupefied_nobel[258544]: }
Dec 01 20:57:32 compute-0 systemd[1]: libpod-1260f3a0bd3e46bb417a41fc1946b5f5b9ba799e41cde6152adcfac0147bff61.scope: Deactivated successfully.
Dec 01 20:57:32 compute-0 podman[258528]: 2025-12-01 20:57:32.475928373 +0000 UTC m=+0.457934761 container died 1260f3a0bd3e46bb417a41fc1946b5f5b9ba799e41cde6152adcfac0147bff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 01 20:57:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:57:32
Dec 01 20:57:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:57:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:57:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'volumes', 'vms', 'images', 'backups', 'cephfs.cephfs.meta']
Dec 01 20:57:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:57:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2941db19f1b23290fa67ba677efb0d251e3987cf6f5a9ce6e1916ce4854c6b2-merged.mount: Deactivated successfully.
Dec 01 20:57:32 compute-0 podman[258528]: 2025-12-01 20:57:32.531160671 +0000 UTC m=+0.513167019 container remove 1260f3a0bd3e46bb417a41fc1946b5f5b9ba799e41cde6152adcfac0147bff61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_nobel, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 01 20:57:32 compute-0 systemd[1]: libpod-conmon-1260f3a0bd3e46bb417a41fc1946b5f5b9ba799e41cde6152adcfac0147bff61.scope: Deactivated successfully.
Dec 01 20:57:32 compute-0 sudo[258450]: pam_unix(sudo:session): session closed for user root
Dec 01 20:57:32 compute-0 sudo[258608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:57:32 compute-0 sudo[258608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:57:32 compute-0 sudo[258608]: pam_unix(sudo:session): session closed for user root
Dec 01 20:57:32 compute-0 sudo[258633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:57:32 compute-0 sudo[258633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:57:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v880: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:33 compute-0 podman[258670]: 2025-12-01 20:57:33.056426007 +0000 UTC m=+0.050194551 container create 02d509e38b54cd9d437a528dd702f21bbfce16bc8a17135c056ee622b20cb92d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dhawan, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:57:33 compute-0 systemd[1]: Started libpod-conmon-02d509e38b54cd9d437a528dd702f21bbfce16bc8a17135c056ee622b20cb92d.scope.
Dec 01 20:57:33 compute-0 podman[258670]: 2025-12-01 20:57:33.037915468 +0000 UTC m=+0.031683992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:57:33 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:57:33 compute-0 podman[258670]: 2025-12-01 20:57:33.163103405 +0000 UTC m=+0.156871999 container init 02d509e38b54cd9d437a528dd702f21bbfce16bc8a17135c056ee622b20cb92d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 01 20:57:33 compute-0 podman[258670]: 2025-12-01 20:57:33.176813934 +0000 UTC m=+0.170582478 container start 02d509e38b54cd9d437a528dd702f21bbfce16bc8a17135c056ee622b20cb92d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 01 20:57:33 compute-0 podman[258670]: 2025-12-01 20:57:33.181310406 +0000 UTC m=+0.175078950 container attach 02d509e38b54cd9d437a528dd702f21bbfce16bc8a17135c056ee622b20cb92d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dhawan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:57:33 compute-0 naughty_dhawan[258686]: 167 167
Dec 01 20:57:33 compute-0 systemd[1]: libpod-02d509e38b54cd9d437a528dd702f21bbfce16bc8a17135c056ee622b20cb92d.scope: Deactivated successfully.
Dec 01 20:57:33 compute-0 podman[258670]: 2025-12-01 20:57:33.186158547 +0000 UTC m=+0.179927061 container died 02d509e38b54cd9d437a528dd702f21bbfce16bc8a17135c056ee622b20cb92d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:57:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad38ad0d1010ff8691d85770c4c804e43d6e0f0ade1c616de25a3419a074fa9a-merged.mount: Deactivated successfully.
Dec 01 20:57:33 compute-0 podman[258670]: 2025-12-01 20:57:33.239534137 +0000 UTC m=+0.233302681 container remove 02d509e38b54cd9d437a528dd702f21bbfce16bc8a17135c056ee622b20cb92d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:57:33 compute-0 systemd[1]: libpod-conmon-02d509e38b54cd9d437a528dd702f21bbfce16bc8a17135c056ee622b20cb92d.scope: Deactivated successfully.
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:57:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:57:33 compute-0 podman[258709]: 2025-12-01 20:57:33.475293905 +0000 UTC m=+0.059196254 container create 051455b1f858cb04234ab289195ff577e5680ae984f0f388d18025b13003ce05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:57:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:33 compute-0 systemd[1]: Started libpod-conmon-051455b1f858cb04234ab289195ff577e5680ae984f0f388d18025b13003ce05.scope.
Dec 01 20:57:33 compute-0 podman[258709]: 2025-12-01 20:57:33.449777596 +0000 UTC m=+0.033679985 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:57:33 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b62dc1394276dc1671465807ea68e596653d828f07df2b2fc42a0b1798256e31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b62dc1394276dc1671465807ea68e596653d828f07df2b2fc42a0b1798256e31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b62dc1394276dc1671465807ea68e596653d828f07df2b2fc42a0b1798256e31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b62dc1394276dc1671465807ea68e596653d828f07df2b2fc42a0b1798256e31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:57:33 compute-0 podman[258709]: 2025-12-01 20:57:33.577766941 +0000 UTC m=+0.161669380 container init 051455b1f858cb04234ab289195ff577e5680ae984f0f388d18025b13003ce05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 01 20:57:33 compute-0 podman[258709]: 2025-12-01 20:57:33.593281657 +0000 UTC m=+0.177184016 container start 051455b1f858cb04234ab289195ff577e5680ae984f0f388d18025b13003ce05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:57:33 compute-0 podman[258709]: 2025-12-01 20:57:33.596982202 +0000 UTC m=+0.180884631 container attach 051455b1f858cb04234ab289195ff577e5680ae984f0f388d18025b13003ce05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:57:34 compute-0 ceph-mon[75880]: pgmap v880: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:34 compute-0 lvm[258805]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:57:34 compute-0 lvm[258806]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:57:34 compute-0 lvm[258805]: VG ceph_vg1 finished
Dec 01 20:57:34 compute-0 lvm[258806]: VG ceph_vg2 finished
Dec 01 20:57:34 compute-0 lvm[258802]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:57:34 compute-0 lvm[258802]: VG ceph_vg0 finished
Dec 01 20:57:34 compute-0 nice_northcutt[258725]: {}
Dec 01 20:57:34 compute-0 systemd[1]: libpod-051455b1f858cb04234ab289195ff577e5680ae984f0f388d18025b13003ce05.scope: Deactivated successfully.
Dec 01 20:57:34 compute-0 systemd[1]: libpod-051455b1f858cb04234ab289195ff577e5680ae984f0f388d18025b13003ce05.scope: Consumed 1.438s CPU time.
Dec 01 20:57:34 compute-0 podman[258709]: 2025-12-01 20:57:34.515506594 +0000 UTC m=+1.099409013 container died 051455b1f858cb04234ab289195ff577e5680ae984f0f388d18025b13003ce05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b62dc1394276dc1671465807ea68e596653d828f07df2b2fc42a0b1798256e31-merged.mount: Deactivated successfully.
Dec 01 20:57:34 compute-0 podman[258709]: 2025-12-01 20:57:34.57514202 +0000 UTC m=+1.159044359 container remove 051455b1f858cb04234ab289195ff577e5680ae984f0f388d18025b13003ce05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 01 20:57:34 compute-0 systemd[1]: libpod-conmon-051455b1f858cb04234ab289195ff577e5680ae984f0f388d18025b13003ce05.scope: Deactivated successfully.
Dec 01 20:57:34 compute-0 sudo[258633]: pam_unix(sudo:session): session closed for user root
Dec 01 20:57:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:57:34 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:57:34 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:57:34 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:57:34 compute-0 sudo[258821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:57:34 compute-0 sudo[258821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:57:34 compute-0 sudo[258821]: pam_unix(sudo:session): session closed for user root
Dec 01 20:57:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v881: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:35 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:57:35 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:57:35 compute-0 ceph-mon[75880]: pgmap v881: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v882: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:37 compute-0 ceph-mon[75880]: pgmap v882: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v883: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:40 compute-0 ceph-mon[75880]: pgmap v883: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:57:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v884: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:42 compute-0 ceph-mon[75880]: pgmap v884: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v885: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:44 compute-0 ceph-mon[75880]: pgmap v885: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:57:44.361 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:57:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:57:44.363 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:57:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:57:44.363 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:57:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v886: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:46 compute-0 ceph-mon[75880]: pgmap v886: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v887: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:48 compute-0 ceph-mon[75880]: pgmap v887: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v888: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:50 compute-0 ceph-mon[75880]: pgmap v888: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v889: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:52 compute-0 ceph-mon[75880]: pgmap v889: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v890: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:54 compute-0 ceph-mon[75880]: pgmap v890: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v891: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:56 compute-0 ceph-mon[75880]: pgmap v891: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v892: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:57 compute-0 nova_compute[244568]: 2025-12-01 20:57:57.436 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:57:57 compute-0 nova_compute[244568]: 2025-12-01 20:57:57.956 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:57:58 compute-0 ceph-mon[75880]: pgmap v892: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:57:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v893: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:57:59 compute-0 podman[258846]: 2025-12-01 20:57:59.115232255 +0000 UTC m=+0.074379868 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 20:57:59 compute-0 ceph-mon[75880]: pgmap v893: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v894: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:00 compute-0 nova_compute[244568]: 2025-12-01 20:58:00.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:58:01 compute-0 nova_compute[244568]: 2025-12-01 20:58:01.956 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:58:01 compute-0 nova_compute[244568]: 2025-12-01 20:58:01.956 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 20:58:01 compute-0 nova_compute[244568]: 2025-12-01 20:58:01.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 20:58:01 compute-0 nova_compute[244568]: 2025-12-01 20:58:01.977 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 20:58:01 compute-0 nova_compute[244568]: 2025-12-01 20:58:01.978 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:58:01 compute-0 nova_compute[244568]: 2025-12-01 20:58:01.978 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:58:01 compute-0 nova_compute[244568]: 2025-12-01 20:58:01.978 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:58:01 compute-0 nova_compute[244568]: 2025-12-01 20:58:01.979 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:58:01 compute-0 nova_compute[244568]: 2025-12-01 20:58:01.979 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 20:58:02 compute-0 ceph-mon[75880]: pgmap v894: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:58:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133096393' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:58:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:58:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133096393' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:58:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v895: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:02 compute-0 nova_compute[244568]: 2025-12-01 20:58:02.956 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:58:02 compute-0 nova_compute[244568]: 2025-12-01 20:58:02.982 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:58:02 compute-0 nova_compute[244568]: 2025-12-01 20:58:02.983 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:58:02 compute-0 nova_compute[244568]: 2025-12-01 20:58:02.984 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:58:02 compute-0 nova_compute[244568]: 2025-12-01 20:58:02.984 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 20:58:02 compute-0 nova_compute[244568]: 2025-12-01 20:58:02.984 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:58:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3133096393' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:58:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3133096393' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:58:03 compute-0 podman[258867]: 2025-12-01 20:58:03.151795865 +0000 UTC m=+0.104731298 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 01 20:58:03 compute-0 podman[258868]: 2025-12-01 20:58:03.156211324 +0000 UTC m=+0.104665887 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Dec 01 20:58:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:58:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:58:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:58:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:58:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:58:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:58:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:58:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/827190963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:58:03 compute-0 nova_compute[244568]: 2025-12-01 20:58:03.532 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:58:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:03 compute-0 nova_compute[244568]: 2025-12-01 20:58:03.733 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:58:03 compute-0 nova_compute[244568]: 2025-12-01 20:58:03.735 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.988265527412295GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 20:58:03 compute-0 nova_compute[244568]: 2025-12-01 20:58:03.735 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:58:03 compute-0 nova_compute[244568]: 2025-12-01 20:58:03.736 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:58:03 compute-0 nova_compute[244568]: 2025-12-01 20:58:03.799 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 20:58:03 compute-0 nova_compute[244568]: 2025-12-01 20:58:03.800 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 20:58:03 compute-0 nova_compute[244568]: 2025-12-01 20:58:03.813 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:58:04 compute-0 ceph-mon[75880]: pgmap v895: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:04 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/827190963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:58:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:58:04 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3563022212' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:58:04 compute-0 nova_compute[244568]: 2025-12-01 20:58:04.392 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:58:04 compute-0 nova_compute[244568]: 2025-12-01 20:58:04.398 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:58:04 compute-0 nova_compute[244568]: 2025-12-01 20:58:04.415 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:58:04 compute-0 nova_compute[244568]: 2025-12-01 20:58:04.417 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 20:58:04 compute-0 nova_compute[244568]: 2025-12-01 20:58:04.417 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:58:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v896: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:05 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3563022212' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:58:06 compute-0 ceph-mon[75880]: pgmap v896: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v897: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:08 compute-0 ceph-mon[75880]: pgmap v897: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v898: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:10 compute-0 ceph-mon[75880]: pgmap v898: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v899: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:12 compute-0 ceph-mon[75880]: pgmap v899: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v900: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:14 compute-0 ceph-mon[75880]: pgmap v900: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v901: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:16 compute-0 ceph-mon[75880]: pgmap v901: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v902: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:18 compute-0 ceph-mon[75880]: pgmap v902: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v903: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:20 compute-0 ceph-mon[75880]: pgmap v903: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v904: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:22 compute-0 ceph-mon[75880]: pgmap v904: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v905: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:24 compute-0 ceph-mon[75880]: pgmap v905: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v906: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:26 compute-0 ceph-mon[75880]: pgmap v906: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v907: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:28 compute-0 ceph-mon[75880]: pgmap v907: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v908: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:30 compute-0 podman[258954]: 2025-12-01 20:58:30.109003254 +0000 UTC m=+0.073472009 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 01 20:58:30 compute-0 ceph-mon[75880]: pgmap v908: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v909: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:32 compute-0 ceph-mon[75880]: pgmap v909: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:58:32
Dec 01 20:58:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:58:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:58:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'vms', 'images', '.mgr', 'backups', 'cephfs.cephfs.data']
Dec 01 20:58:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:58:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v910: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:58:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:58:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:34 compute-0 podman[258974]: 2025-12-01 20:58:34.099902928 +0000 UTC m=+0.057514970 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 01 20:58:34 compute-0 podman[258975]: 2025-12-01 20:58:34.157953364 +0000 UTC m=+0.107100881 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 01 20:58:34 compute-0 ceph-mon[75880]: pgmap v910: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:34 compute-0 sudo[259018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:58:34 compute-0 sudo[259018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:58:34 compute-0 sudo[259018]: pam_unix(sudo:session): session closed for user root
Dec 01 20:58:34 compute-0 sudo[259043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:58:34 compute-0 sudo[259043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:58:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v911: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:35 compute-0 sudo[259043]: pam_unix(sudo:session): session closed for user root
Dec 01 20:58:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:58:35 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:58:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:58:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:58:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:58:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:58:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:58:35 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:58:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:58:35 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:58:35 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:58:35 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:58:35 compute-0 sudo[259099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:58:35 compute-0 sudo[259099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:58:35 compute-0 sudo[259099]: pam_unix(sudo:session): session closed for user root
Dec 01 20:58:35 compute-0 sudo[259124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:58:35 compute-0 sudo[259124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:58:36 compute-0 podman[259162]: 2025-12-01 20:58:36.217129125 +0000 UTC m=+0.045030580 container create 32dc95ee7e56d0b24669450124129017a6ddf4fdf8023551cdf26ce317cec5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_aryabhata, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:58:36 compute-0 systemd[1]: Started libpod-conmon-32dc95ee7e56d0b24669450124129017a6ddf4fdf8023551cdf26ce317cec5cb.scope.
Dec 01 20:58:36 compute-0 podman[259162]: 2025-12-01 20:58:36.198991597 +0000 UTC m=+0.026893302 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:58:36 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:58:36 compute-0 podman[259162]: 2025-12-01 20:58:36.328327603 +0000 UTC m=+0.156229098 container init 32dc95ee7e56d0b24669450124129017a6ddf4fdf8023551cdf26ce317cec5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_aryabhata, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 01 20:58:36 compute-0 podman[259162]: 2025-12-01 20:58:36.341638249 +0000 UTC m=+0.169539704 container start 32dc95ee7e56d0b24669450124129017a6ddf4fdf8023551cdf26ce317cec5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:58:36 compute-0 podman[259162]: 2025-12-01 20:58:36.345244072 +0000 UTC m=+0.173145517 container attach 32dc95ee7e56d0b24669450124129017a6ddf4fdf8023551cdf26ce317cec5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_aryabhata, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:58:36 compute-0 practical_aryabhata[259178]: 167 167
Dec 01 20:58:36 compute-0 systemd[1]: libpod-32dc95ee7e56d0b24669450124129017a6ddf4fdf8023551cdf26ce317cec5cb.scope: Deactivated successfully.
Dec 01 20:58:36 compute-0 podman[259162]: 2025-12-01 20:58:36.355029688 +0000 UTC m=+0.182931173 container died 32dc95ee7e56d0b24669450124129017a6ddf4fdf8023551cdf26ce317cec5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:58:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0b70fbfd2e16c137943998c2501ba225ad1ad414270ebc360dc2ad1a70cfdc8-merged.mount: Deactivated successfully.
Dec 01 20:58:36 compute-0 podman[259162]: 2025-12-01 20:58:36.404575128 +0000 UTC m=+0.232476583 container remove 32dc95ee7e56d0b24669450124129017a6ddf4fdf8023551cdf26ce317cec5cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_aryabhata, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 20:58:36 compute-0 systemd[1]: libpod-conmon-32dc95ee7e56d0b24669450124129017a6ddf4fdf8023551cdf26ce317cec5cb.scope: Deactivated successfully.
Dec 01 20:58:36 compute-0 ceph-mon[75880]: pgmap v911: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:58:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:58:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:58:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:58:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:58:36 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:58:36 compute-0 podman[259202]: 2025-12-01 20:58:36.665617183 +0000 UTC m=+0.071008692 container create 72a2658133879607b3e5817a4bce1ee56d34f0844a10d92020cde90353bd48d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:58:36 compute-0 systemd[1]: Started libpod-conmon-72a2658133879607b3e5817a4bce1ee56d34f0844a10d92020cde90353bd48d7.scope.
Dec 01 20:58:36 compute-0 podman[259202]: 2025-12-01 20:58:36.634762068 +0000 UTC m=+0.040153637 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:58:36 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2add9aff2b8913e1a7e554dc550aea0ce12de2d8f694e930752bfd3f3f22f8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2add9aff2b8913e1a7e554dc550aea0ce12de2d8f694e930752bfd3f3f22f8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2add9aff2b8913e1a7e554dc550aea0ce12de2d8f694e930752bfd3f3f22f8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2add9aff2b8913e1a7e554dc550aea0ce12de2d8f694e930752bfd3f3f22f8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2add9aff2b8913e1a7e554dc550aea0ce12de2d8f694e930752bfd3f3f22f8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:36 compute-0 podman[259202]: 2025-12-01 20:58:36.775791199 +0000 UTC m=+0.181182748 container init 72a2658133879607b3e5817a4bce1ee56d34f0844a10d92020cde90353bd48d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:58:36 compute-0 podman[259202]: 2025-12-01 20:58:36.786022489 +0000 UTC m=+0.191413998 container start 72a2658133879607b3e5817a4bce1ee56d34f0844a10d92020cde90353bd48d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 01 20:58:36 compute-0 podman[259202]: 2025-12-01 20:58:36.791282314 +0000 UTC m=+0.196673833 container attach 72a2658133879607b3e5817a4bce1ee56d34f0844a10d92020cde90353bd48d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 01 20:58:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v912: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:37 compute-0 hopeful_kilby[259219]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:58:37 compute-0 hopeful_kilby[259219]: --> All data devices are unavailable
Dec 01 20:58:37 compute-0 systemd[1]: libpod-72a2658133879607b3e5817a4bce1ee56d34f0844a10d92020cde90353bd48d7.scope: Deactivated successfully.
Dec 01 20:58:37 compute-0 podman[259202]: 2025-12-01 20:58:37.416919254 +0000 UTC m=+0.822310743 container died 72a2658133879607b3e5817a4bce1ee56d34f0844a10d92020cde90353bd48d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:58:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2add9aff2b8913e1a7e554dc550aea0ce12de2d8f694e930752bfd3f3f22f8f-merged.mount: Deactivated successfully.
Dec 01 20:58:37 compute-0 podman[259202]: 2025-12-01 20:58:37.471663286 +0000 UTC m=+0.877054765 container remove 72a2658133879607b3e5817a4bce1ee56d34f0844a10d92020cde90353bd48d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:58:37 compute-0 systemd[1]: libpod-conmon-72a2658133879607b3e5817a4bce1ee56d34f0844a10d92020cde90353bd48d7.scope: Deactivated successfully.
Dec 01 20:58:37 compute-0 ceph-mon[75880]: pgmap v912: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:37 compute-0 sudo[259124]: pam_unix(sudo:session): session closed for user root
Dec 01 20:58:37 compute-0 sudo[259252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:58:37 compute-0 sudo[259252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:58:37 compute-0 sudo[259252]: pam_unix(sudo:session): session closed for user root
Dec 01 20:58:37 compute-0 sudo[259277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:58:37 compute-0 sudo[259277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:58:38 compute-0 podman[259315]: 2025-12-01 20:58:38.064593504 +0000 UTC m=+0.074979868 container create b2bb6d943b637770681d82d3884ca46aa2b89420dc37365b4aa8c7e17b07529a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 01 20:58:38 compute-0 systemd[1]: Started libpod-conmon-b2bb6d943b637770681d82d3884ca46aa2b89420dc37365b4aa8c7e17b07529a.scope.
Dec 01 20:58:38 compute-0 podman[259315]: 2025-12-01 20:58:38.03509301 +0000 UTC m=+0.045479384 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:58:38 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:58:38 compute-0 podman[259315]: 2025-12-01 20:58:38.152611006 +0000 UTC m=+0.162997370 container init b2bb6d943b637770681d82d3884ca46aa2b89420dc37365b4aa8c7e17b07529a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:58:38 compute-0 podman[259315]: 2025-12-01 20:58:38.16165885 +0000 UTC m=+0.172045184 container start b2bb6d943b637770681d82d3884ca46aa2b89420dc37365b4aa8c7e17b07529a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 20:58:38 compute-0 podman[259315]: 2025-12-01 20:58:38.166597044 +0000 UTC m=+0.176983418 container attach b2bb6d943b637770681d82d3884ca46aa2b89420dc37365b4aa8c7e17b07529a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:58:38 compute-0 recursing_lederberg[259331]: 167 167
Dec 01 20:58:38 compute-0 systemd[1]: libpod-b2bb6d943b637770681d82d3884ca46aa2b89420dc37365b4aa8c7e17b07529a.scope: Deactivated successfully.
Dec 01 20:58:38 compute-0 podman[259315]: 2025-12-01 20:58:38.171118526 +0000 UTC m=+0.181504860 container died b2bb6d943b637770681d82d3884ca46aa2b89420dc37365b4aa8c7e17b07529a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 01 20:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-14aed101880b601be9184369ecc620c2a83ed3e457a9e8155361d1cec31c881e-merged.mount: Deactivated successfully.
Dec 01 20:58:38 compute-0 podman[259315]: 2025-12-01 20:58:38.210536499 +0000 UTC m=+0.220922833 container remove b2bb6d943b637770681d82d3884ca46aa2b89420dc37365b4aa8c7e17b07529a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 01 20:58:38 compute-0 systemd[1]: libpod-conmon-b2bb6d943b637770681d82d3884ca46aa2b89420dc37365b4aa8c7e17b07529a.scope: Deactivated successfully.
Dec 01 20:58:38 compute-0 podman[259353]: 2025-12-01 20:58:38.405618711 +0000 UTC m=+0.061404802 container create fc0c0f81771874f06b2ce812e233aa71a6526ac1217e68f5c598a727b9b1562b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_moore, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:58:38 compute-0 systemd[1]: Started libpod-conmon-fc0c0f81771874f06b2ce812e233aa71a6526ac1217e68f5c598a727b9b1562b.scope.
Dec 01 20:58:38 compute-0 podman[259353]: 2025-12-01 20:58:38.374396884 +0000 UTC m=+0.030183045 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:58:38 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3c565b3e7fbc97e58148fc8fe8ab7a630212468ae3ec542a293becf9f306f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3c565b3e7fbc97e58148fc8fe8ab7a630212468ae3ec542a293becf9f306f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3c565b3e7fbc97e58148fc8fe8ab7a630212468ae3ec542a293becf9f306f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a3c565b3e7fbc97e58148fc8fe8ab7a630212468ae3ec542a293becf9f306f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:38 compute-0 podman[259353]: 2025-12-01 20:58:38.497763193 +0000 UTC m=+0.153549304 container init fc0c0f81771874f06b2ce812e233aa71a6526ac1217e68f5c598a727b9b1562b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 01 20:58:38 compute-0 podman[259353]: 2025-12-01 20:58:38.507272201 +0000 UTC m=+0.163058282 container start fc0c0f81771874f06b2ce812e233aa71a6526ac1217e68f5c598a727b9b1562b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_moore, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:58:38 compute-0 podman[259353]: 2025-12-01 20:58:38.51110313 +0000 UTC m=+0.166889211 container attach fc0c0f81771874f06b2ce812e233aa71a6526ac1217e68f5c598a727b9b1562b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_moore, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:58:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:38 compute-0 festive_moore[259369]: {
Dec 01 20:58:38 compute-0 festive_moore[259369]:     "0": [
Dec 01 20:58:38 compute-0 festive_moore[259369]:         {
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "devices": [
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "/dev/loop3"
Dec 01 20:58:38 compute-0 festive_moore[259369]:             ],
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_name": "ceph_lv0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_size": "21470642176",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "name": "ceph_lv0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "tags": {
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.cluster_name": "ceph",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.crush_device_class": "",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.encrypted": "0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.objectstore": "bluestore",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.osd_id": "0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.type": "block",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.vdo": "0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.with_tpm": "0"
Dec 01 20:58:38 compute-0 festive_moore[259369]:             },
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "type": "block",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "vg_name": "ceph_vg0"
Dec 01 20:58:38 compute-0 festive_moore[259369]:         }
Dec 01 20:58:38 compute-0 festive_moore[259369]:     ],
Dec 01 20:58:38 compute-0 festive_moore[259369]:     "1": [
Dec 01 20:58:38 compute-0 festive_moore[259369]:         {
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "devices": [
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "/dev/loop4"
Dec 01 20:58:38 compute-0 festive_moore[259369]:             ],
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_name": "ceph_lv1",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_size": "21470642176",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "name": "ceph_lv1",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "tags": {
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.cluster_name": "ceph",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.crush_device_class": "",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.encrypted": "0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.objectstore": "bluestore",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.osd_id": "1",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.type": "block",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.vdo": "0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.with_tpm": "0"
Dec 01 20:58:38 compute-0 festive_moore[259369]:             },
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "type": "block",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "vg_name": "ceph_vg1"
Dec 01 20:58:38 compute-0 festive_moore[259369]:         }
Dec 01 20:58:38 compute-0 festive_moore[259369]:     ],
Dec 01 20:58:38 compute-0 festive_moore[259369]:     "2": [
Dec 01 20:58:38 compute-0 festive_moore[259369]:         {
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "devices": [
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "/dev/loop5"
Dec 01 20:58:38 compute-0 festive_moore[259369]:             ],
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_name": "ceph_lv2",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_size": "21470642176",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "name": "ceph_lv2",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "tags": {
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.cluster_name": "ceph",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.crush_device_class": "",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.encrypted": "0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.objectstore": "bluestore",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.osd_id": "2",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.type": "block",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.vdo": "0",
Dec 01 20:58:38 compute-0 festive_moore[259369]:                 "ceph.with_tpm": "0"
Dec 01 20:58:38 compute-0 festive_moore[259369]:             },
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "type": "block",
Dec 01 20:58:38 compute-0 festive_moore[259369]:             "vg_name": "ceph_vg2"
Dec 01 20:58:38 compute-0 festive_moore[259369]:         }
Dec 01 20:58:38 compute-0 festive_moore[259369]:     ]
Dec 01 20:58:38 compute-0 festive_moore[259369]: }
Dec 01 20:58:38 compute-0 systemd[1]: libpod-fc0c0f81771874f06b2ce812e233aa71a6526ac1217e68f5c598a727b9b1562b.scope: Deactivated successfully.
Dec 01 20:58:38 compute-0 podman[259353]: 2025-12-01 20:58:38.858955881 +0000 UTC m=+0.514741962 container died fc0c0f81771874f06b2ce812e233aa71a6526ac1217e68f5c598a727b9b1562b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a3c565b3e7fbc97e58148fc8fe8ab7a630212468ae3ec542a293becf9f306f0-merged.mount: Deactivated successfully.
Dec 01 20:58:38 compute-0 podman[259353]: 2025-12-01 20:58:38.922175228 +0000 UTC m=+0.577961329 container remove fc0c0f81771874f06b2ce812e233aa71a6526ac1217e68f5c598a727b9b1562b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_moore, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 01 20:58:38 compute-0 systemd[1]: libpod-conmon-fc0c0f81771874f06b2ce812e233aa71a6526ac1217e68f5c598a727b9b1562b.scope: Deactivated successfully.
Dec 01 20:58:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v913: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:38 compute-0 sudo[259277]: pam_unix(sudo:session): session closed for user root
Dec 01 20:58:39 compute-0 sudo[259388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:58:39 compute-0 sudo[259388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:58:39 compute-0 sudo[259388]: pam_unix(sudo:session): session closed for user root
Dec 01 20:58:39 compute-0 sudo[259413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:58:39 compute-0 sudo[259413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:58:39 compute-0 podman[259450]: 2025-12-01 20:58:39.432294775 +0000 UTC m=+0.053670850 container create 45b2fc021b3edf11ca046164b407b3332c8ff3f4a64e18c899363a566a2ccb80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 01 20:58:39 compute-0 systemd[1]: Started libpod-conmon-45b2fc021b3edf11ca046164b407b3332c8ff3f4a64e18c899363a566a2ccb80.scope.
Dec 01 20:58:39 compute-0 podman[259450]: 2025-12-01 20:58:39.405016672 +0000 UTC m=+0.026392807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:58:39 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:58:39 compute-0 podman[259450]: 2025-12-01 20:58:39.535148132 +0000 UTC m=+0.156524207 container init 45b2fc021b3edf11ca046164b407b3332c8ff3f4a64e18c899363a566a2ccb80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_aryabhata, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:58:39 compute-0 podman[259450]: 2025-12-01 20:58:39.54433939 +0000 UTC m=+0.165715455 container start 45b2fc021b3edf11ca046164b407b3332c8ff3f4a64e18c899363a566a2ccb80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:58:39 compute-0 podman[259450]: 2025-12-01 20:58:39.548820229 +0000 UTC m=+0.170196294 container attach 45b2fc021b3edf11ca046164b407b3332c8ff3f4a64e18c899363a566a2ccb80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 01 20:58:39 compute-0 funny_aryabhata[259467]: 167 167
Dec 01 20:58:39 compute-0 systemd[1]: libpod-45b2fc021b3edf11ca046164b407b3332c8ff3f4a64e18c899363a566a2ccb80.scope: Deactivated successfully.
Dec 01 20:58:39 compute-0 podman[259450]: 2025-12-01 20:58:39.551098811 +0000 UTC m=+0.172474866 container died 45b2fc021b3edf11ca046164b407b3332c8ff3f4a64e18c899363a566a2ccb80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2edc5f330ed760065f67e936bb41ad89c531d10a29aa4ec7294cec780b657b59-merged.mount: Deactivated successfully.
Dec 01 20:58:39 compute-0 podman[259450]: 2025-12-01 20:58:39.596654486 +0000 UTC m=+0.218030571 container remove 45b2fc021b3edf11ca046164b407b3332c8ff3f4a64e18c899363a566a2ccb80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:58:39 compute-0 systemd[1]: libpod-conmon-45b2fc021b3edf11ca046164b407b3332c8ff3f4a64e18c899363a566a2ccb80.scope: Deactivated successfully.
Dec 01 20:58:39 compute-0 podman[259491]: 2025-12-01 20:58:39.815599574 +0000 UTC m=+0.053608058 container create 0a8a5b5c58882ed747c2a58ed65f55ec655c42819154e6dfe364ff2360ec7c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 01 20:58:39 compute-0 systemd[1]: Started libpod-conmon-0a8a5b5c58882ed747c2a58ed65f55ec655c42819154e6dfe364ff2360ec7c88.scope.
Dec 01 20:58:39 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f42fba4f7ad03bab992df6a020d07bbecc8068755f456b33e977dc5bf7d2cdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:39 compute-0 podman[259491]: 2025-12-01 20:58:39.794907178 +0000 UTC m=+0.032915642 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f42fba4f7ad03bab992df6a020d07bbecc8068755f456b33e977dc5bf7d2cdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f42fba4f7ad03bab992df6a020d07bbecc8068755f456b33e977dc5bf7d2cdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f42fba4f7ad03bab992df6a020d07bbecc8068755f456b33e977dc5bf7d2cdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:58:39 compute-0 podman[259491]: 2025-12-01 20:58:39.900736298 +0000 UTC m=+0.138744762 container init 0a8a5b5c58882ed747c2a58ed65f55ec655c42819154e6dfe364ff2360ec7c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lalande, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:58:39 compute-0 podman[259491]: 2025-12-01 20:58:39.906071365 +0000 UTC m=+0.144079809 container start 0a8a5b5c58882ed747c2a58ed65f55ec655c42819154e6dfe364ff2360ec7c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lalande, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:58:39 compute-0 podman[259491]: 2025-12-01 20:58:39.90942069 +0000 UTC m=+0.147429154 container attach 0a8a5b5c58882ed747c2a58ed65f55ec655c42819154e6dfe364ff2360ec7c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lalande, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 20:58:40 compute-0 ceph-mon[75880]: pgmap v913: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:58:40 compute-0 lvm[259586]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:58:40 compute-0 lvm[259586]: VG ceph_vg1 finished
Dec 01 20:58:40 compute-0 lvm[259584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:58:40 compute-0 lvm[259584]: VG ceph_vg0 finished
Dec 01 20:58:40 compute-0 lvm[259588]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:58:40 compute-0 lvm[259588]: VG ceph_vg2 finished
Dec 01 20:58:40 compute-0 modest_lalande[259507]: {}
Dec 01 20:58:40 compute-0 systemd[1]: libpod-0a8a5b5c58882ed747c2a58ed65f55ec655c42819154e6dfe364ff2360ec7c88.scope: Deactivated successfully.
Dec 01 20:58:40 compute-0 systemd[1]: libpod-0a8a5b5c58882ed747c2a58ed65f55ec655c42819154e6dfe364ff2360ec7c88.scope: Consumed 1.364s CPU time.
Dec 01 20:58:40 compute-0 podman[259491]: 2025-12-01 20:58:40.730563204 +0000 UTC m=+0.968571718 container died 0a8a5b5c58882ed747c2a58ed65f55ec655c42819154e6dfe364ff2360ec7c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:58:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f42fba4f7ad03bab992df6a020d07bbecc8068755f456b33e977dc5bf7d2cdb-merged.mount: Deactivated successfully.
Dec 01 20:58:40 compute-0 podman[259491]: 2025-12-01 20:58:40.799281444 +0000 UTC m=+1.037289898 container remove 0a8a5b5c58882ed747c2a58ed65f55ec655c42819154e6dfe364ff2360ec7c88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_lalande, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 01 20:58:40 compute-0 systemd[1]: libpod-conmon-0a8a5b5c58882ed747c2a58ed65f55ec655c42819154e6dfe364ff2360ec7c88.scope: Deactivated successfully.
Dec 01 20:58:40 compute-0 sudo[259413]: pam_unix(sudo:session): session closed for user root
Dec 01 20:58:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:58:40 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:58:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:58:40 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:58:40 compute-0 sudo[259602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:58:40 compute-0 sudo[259602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:58:40 compute-0 sudo[259602]: pam_unix(sudo:session): session closed for user root
Dec 01 20:58:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v914: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:41 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:58:41 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:58:41 compute-0 ceph-mon[75880]: pgmap v914: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v915: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:44 compute-0 ceph-mon[75880]: pgmap v915: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:58:44.362 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:58:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:58:44.364 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:58:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:58:44.364 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:58:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v916: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:46 compute-0 ceph-mon[75880]: pgmap v916: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v917: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:48 compute-0 ceph-mon[75880]: pgmap v917: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v918: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:50 compute-0 ceph-mon[75880]: pgmap v918: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v919: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:51 compute-0 nova_compute[244568]: 2025-12-01 20:58:51.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:58:51 compute-0 nova_compute[244568]: 2025-12-01 20:58:51.958 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 01 20:58:52 compute-0 ceph-mon[75880]: pgmap v919: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v920: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.558685) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622733558726, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 904, "num_deletes": 255, "total_data_size": 871699, "memory_usage": 889208, "flush_reason": "Manual Compaction"}
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622733569322, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 850752, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18360, "largest_seqno": 19263, "table_properties": {"data_size": 846233, "index_size": 2171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9282, "raw_average_key_size": 18, "raw_value_size": 837190, "raw_average_value_size": 1661, "num_data_blocks": 99, "num_entries": 504, "num_filter_entries": 504, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764622651, "oldest_key_time": 1764622651, "file_creation_time": 1764622733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 10714 microseconds, and 5750 cpu microseconds.
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.569395) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 850752 bytes OK
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.569422) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.570888) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.570905) EVENT_LOG_v1 {"time_micros": 1764622733570899, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.570926) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 867300, prev total WAL file size 867300, number of live WAL files 2.
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.571698) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323533' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(830KB)], [44(4787KB)]
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622733571755, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 5753131, "oldest_snapshot_seqno": -1}
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 3892 keys, 5655665 bytes, temperature: kUnknown
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622733622276, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 5655665, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5627740, "index_size": 17063, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 93986, "raw_average_key_size": 24, "raw_value_size": 5555911, "raw_average_value_size": 1427, "num_data_blocks": 726, "num_entries": 3892, "num_filter_entries": 3892, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 1764622733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.622661) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 5655665 bytes
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.624451) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.5 rd, 111.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 4.7 +0.0 blob) out(5.4 +0.0 blob), read-write-amplify(13.4) write-amplify(6.6) OK, records in: 4414, records dropped: 522 output_compression: NoCompression
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.624472) EVENT_LOG_v1 {"time_micros": 1764622733624460, "job": 22, "event": "compaction_finished", "compaction_time_micros": 50677, "compaction_time_cpu_micros": 17335, "output_level": 6, "num_output_files": 1, "total_output_size": 5655665, "num_input_records": 4414, "num_output_records": 3892, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622733624814, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622733625904, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.571530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.626045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.626053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.626056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.626059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:58:53 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-20:58:53.626062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 20:58:54 compute-0 ceph-mon[75880]: pgmap v920: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v921: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:56 compute-0 ceph-mon[75880]: pgmap v921: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:56 compute-0 nova_compute[244568]: 2025-12-01 20:58:56.511 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:58:56 compute-0 nova_compute[244568]: 2025-12-01 20:58:56.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:58:56 compute-0 nova_compute[244568]: 2025-12-01 20:58:56.958 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 01 20:58:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v922: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:56 compute-0 nova_compute[244568]: 2025-12-01 20:58:56.987 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 01 20:58:58 compute-0 ceph-mon[75880]: pgmap v922: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:58:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v923: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:58:58 compute-0 nova_compute[244568]: 2025-12-01 20:58:58.988 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:58:59 compute-0 nova_compute[244568]: 2025-12-01 20:58:59.958 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:59:00 compute-0 ceph-mon[75880]: pgmap v923: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v924: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:01 compute-0 podman[259628]: 2025-12-01 20:59:01.133330215 +0000 UTC m=+0.085481225 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 01 20:59:01 compute-0 nova_compute[244568]: 2025-12-01 20:59:01.980 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:59:01 compute-0 nova_compute[244568]: 2025-12-01 20:59:01.981 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 20:59:01 compute-0 nova_compute[244568]: 2025-12-01 20:59:01.981 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 20:59:02 compute-0 nova_compute[244568]: 2025-12-01 20:59:02.016 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 20:59:02 compute-0 nova_compute[244568]: 2025-12-01 20:59:02.016 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:59:02 compute-0 nova_compute[244568]: 2025-12-01 20:59:02.017 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:59:02 compute-0 nova_compute[244568]: 2025-12-01 20:59:02.018 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:59:02 compute-0 nova_compute[244568]: 2025-12-01 20:59:02.018 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 20:59:02 compute-0 ceph-mon[75880]: pgmap v924: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 20:59:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2647858688' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:59:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 20:59:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2647858688' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:59:02 compute-0 nova_compute[244568]: 2025-12-01 20:59:02.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:59:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v925: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:02 compute-0 nova_compute[244568]: 2025-12-01 20:59:02.988 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:59:02 compute-0 nova_compute[244568]: 2025-12-01 20:59:02.989 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:59:02 compute-0 nova_compute[244568]: 2025-12-01 20:59:02.990 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:59:02 compute-0 nova_compute[244568]: 2025-12-01 20:59:02.990 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 20:59:02 compute-0 nova_compute[244568]: 2025-12-01 20:59:02.991 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:59:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/2647858688' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 20:59:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/2647858688' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 20:59:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:59:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:59:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:59:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:59:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:59:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:59:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:59:03 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030561086' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:59:03 compute-0 nova_compute[244568]: 2025-12-01 20:59:03.591 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:59:03 compute-0 nova_compute[244568]: 2025-12-01 20:59:03.821 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 20:59:03 compute-0 nova_compute[244568]: 2025-12-01 20:59:03.823 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5119MB free_disk=59.988265527412295GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 20:59:03 compute-0 nova_compute[244568]: 2025-12-01 20:59:03.824 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:59:03 compute-0 nova_compute[244568]: 2025-12-01 20:59:03.824 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.087 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.088 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 20:59:04 compute-0 ceph-mon[75880]: pgmap v925: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:04 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3030561086' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.168 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Refreshing inventories for resource provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.270 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Updating ProviderTree inventory for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.270 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Updating inventory in ProviderTree for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.297 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Refreshing aggregate associations for resource provider 1adb778b-ac5d-48bb-abc3-c422b12ca516, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.318 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Refreshing trait associations for resource provider 1adb778b-ac5d-48bb-abc3-c422b12ca516, traits: COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AESNI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.330 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:59:04 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 20:59:04 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183061242' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.906 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.912 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.935 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.937 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 20:59:04 compute-0 nova_compute[244568]: 2025-12-01 20:59:04.938 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:59:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v926: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:05 compute-0 podman[259692]: 2025-12-01 20:59:05.120046219 +0000 UTC m=+0.077847076 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 01 20:59:05 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2183061242' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 20:59:05 compute-0 podman[259693]: 2025-12-01 20:59:05.165529532 +0000 UTC m=+0.116950939 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 20:59:05 compute-0 nova_compute[244568]: 2025-12-01 20:59:05.938 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:59:05 compute-0 nova_compute[244568]: 2025-12-01 20:59:05.939 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:59:05 compute-0 nova_compute[244568]: 2025-12-01 20:59:05.939 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:59:05 compute-0 nova_compute[244568]: 2025-12-01 20:59:05.968 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:59:06 compute-0 ceph-mon[75880]: pgmap v926: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v927: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:08 compute-0 ceph-mon[75880]: pgmap v927: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v928: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:10 compute-0 ceph-mon[75880]: pgmap v928: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v929: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:12 compute-0 ceph-mon[75880]: pgmap v929: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:12 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v930: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:14 compute-0 ceph-mon[75880]: pgmap v930: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:14 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v931: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:16 compute-0 ceph-mon[75880]: pgmap v931: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:16 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v932: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:18 compute-0 ceph-mon[75880]: pgmap v932: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:18 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v933: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:20 compute-0 ceph-mon[75880]: pgmap v933: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:20 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v934: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:22 compute-0 ceph-mon[75880]: pgmap v934: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:22 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v935: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:24 compute-0 ceph-mon[75880]: pgmap v935: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:24 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v936: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:26 compute-0 ceph-mon[75880]: pgmap v936: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:26 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v937: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:28 compute-0 nova_compute[244568]: 2025-12-01 20:59:28.410 244572 DEBUG oslo_concurrency.processutils [None req-547cad9d-df27-4c98-a864-20113c14e168 f260f4c8c3cd4a8b8fe0f58c46ce387c b5e99dcfb68146d4997d0b703469d5ea - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 20:59:28 compute-0 ceph-mon[75880]: pgmap v937: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:28 compute-0 nova_compute[244568]: 2025-12-01 20:59:28.441 244572 DEBUG oslo_concurrency.processutils [None req-547cad9d-df27-4c98-a864-20113c14e168 f260f4c8c3cd4a8b8fe0f58c46ce387c b5e99dcfb68146d4997d0b703469d5ea - - default default] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 20:59:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:28 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v938: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:30 compute-0 ceph-mon[75880]: pgmap v938: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:30 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v939: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:32 compute-0 podman[259738]: 2025-12-01 20:59:32.122090243 +0000 UTC m=+0.075922995 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 01 20:59:32 compute-0 ceph-mon[75880]: pgmap v939: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_20:59:32
Dec 01 20:59:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 20:59:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 20:59:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'images', 'cephfs.cephfs.data']
Dec 01 20:59:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 20:59:32 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v940: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:59:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 20:59:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:34 compute-0 ceph-mon[75880]: pgmap v940: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:34 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v941: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:36 compute-0 podman[259759]: 2025-12-01 20:59:36.12390577 +0000 UTC m=+0.078907680 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 01 20:59:36 compute-0 podman[259760]: 2025-12-01 20:59:36.143932236 +0000 UTC m=+0.098210093 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:59:36 compute-0 ceph-mon[75880]: pgmap v941: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:36 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:59:36.811 155855 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:ee:df', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2e:39:ea:af:48:04'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 01 20:59:36 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:59:36.814 155855 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 01 20:59:36 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v942: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:38 compute-0 ceph-mon[75880]: pgmap v942: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:38 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v943: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:40 compute-0 ceph-mon[75880]: pgmap v943: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 20:59:40 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v944: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:41 compute-0 sudo[259801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:59:41 compute-0 sudo[259801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:59:41 compute-0 sudo[259801]: pam_unix(sudo:session): session closed for user root
Dec 01 20:59:41 compute-0 sudo[259826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 20:59:41 compute-0 sudo[259826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:59:41 compute-0 sudo[259826]: pam_unix(sudo:session): session closed for user root
Dec 01 20:59:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:59:41 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:59:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 20:59:41 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:59:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 20:59:41 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:59:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 20:59:41 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:59:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 20:59:41 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:59:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 20:59:41 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:59:41 compute-0 sudo[259881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:59:41 compute-0 sudo[259881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:59:41 compute-0 sudo[259881]: pam_unix(sudo:session): session closed for user root
Dec 01 20:59:42 compute-0 sudo[259906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 20:59:42 compute-0 sudo[259906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:59:42 compute-0 podman[259944]: 2025-12-01 20:59:42.297855053 +0000 UTC m=+0.041186570 container create 6b7125ee51df1fc1572194b8247702313ec497aef28c0c657fbb12fb8d42409f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_clarke, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:59:42 compute-0 systemd[1]: Started libpod-conmon-6b7125ee51df1fc1572194b8247702313ec497aef28c0c657fbb12fb8d42409f.scope.
Dec 01 20:59:42 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:59:42 compute-0 podman[259944]: 2025-12-01 20:59:42.279299102 +0000 UTC m=+0.022630659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:59:42 compute-0 podman[259944]: 2025-12-01 20:59:42.396721545 +0000 UTC m=+0.140053112 container init 6b7125ee51df1fc1572194b8247702313ec497aef28c0c657fbb12fb8d42409f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 01 20:59:42 compute-0 podman[259944]: 2025-12-01 20:59:42.410353401 +0000 UTC m=+0.153684918 container start 6b7125ee51df1fc1572194b8247702313ec497aef28c0c657fbb12fb8d42409f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:59:42 compute-0 podman[259944]: 2025-12-01 20:59:42.413737977 +0000 UTC m=+0.157069544 container attach 6b7125ee51df1fc1572194b8247702313ec497aef28c0c657fbb12fb8d42409f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:59:42 compute-0 sad_clarke[259960]: 167 167
Dec 01 20:59:42 compute-0 systemd[1]: libpod-6b7125ee51df1fc1572194b8247702313ec497aef28c0c657fbb12fb8d42409f.scope: Deactivated successfully.
Dec 01 20:59:42 compute-0 podman[259944]: 2025-12-01 20:59:42.419063694 +0000 UTC m=+0.162395211 container died 6b7125ee51df1fc1572194b8247702313ec497aef28c0c657fbb12fb8d42409f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7c13bd3833b4fef4f3f7a68df796e76ca2200ac35e1fc4248dc8102e01b0330-merged.mount: Deactivated successfully.
Dec 01 20:59:42 compute-0 podman[259944]: 2025-12-01 20:59:42.463557655 +0000 UTC m=+0.206889172 container remove 6b7125ee51df1fc1572194b8247702313ec497aef28c0c657fbb12fb8d42409f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_clarke, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:59:42 compute-0 ceph-mon[75880]: pgmap v944: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:59:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 20:59:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:59:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 20:59:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 20:59:42 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 20:59:42 compute-0 systemd[1]: libpod-conmon-6b7125ee51df1fc1572194b8247702313ec497aef28c0c657fbb12fb8d42409f.scope: Deactivated successfully.
Dec 01 20:59:42 compute-0 podman[259985]: 2025-12-01 20:59:42.680072048 +0000 UTC m=+0.053754652 container create f618ba0d91e4eec5b02fe10a00c3ca4d82f96eaac629867af064c5b8681e3eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatterjee, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Dec 01 20:59:42 compute-0 systemd[1]: Started libpod-conmon-f618ba0d91e4eec5b02fe10a00c3ca4d82f96eaac629867af064c5b8681e3eb4.scope.
Dec 01 20:59:42 compute-0 podman[259985]: 2025-12-01 20:59:42.652588979 +0000 UTC m=+0.026271663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:59:42 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8921637b59b4d767b48525f3683383d0c1bced13d06fd989db85f1061e28e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8921637b59b4d767b48525f3683383d0c1bced13d06fd989db85f1061e28e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8921637b59b4d767b48525f3683383d0c1bced13d06fd989db85f1061e28e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8921637b59b4d767b48525f3683383d0c1bced13d06fd989db85f1061e28e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8921637b59b4d767b48525f3683383d0c1bced13d06fd989db85f1061e28e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:42 compute-0 podman[259985]: 2025-12-01 20:59:42.778967222 +0000 UTC m=+0.152649916 container init f618ba0d91e4eec5b02fe10a00c3ca4d82f96eaac629867af064c5b8681e3eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:59:42 compute-0 podman[259985]: 2025-12-01 20:59:42.793907769 +0000 UTC m=+0.167590383 container start f618ba0d91e4eec5b02fe10a00c3ca4d82f96eaac629867af064c5b8681e3eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatterjee, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:59:42 compute-0 podman[259985]: 2025-12-01 20:59:42.798650248 +0000 UTC m=+0.172332892 container attach f618ba0d91e4eec5b02fe10a00c3ca4d82f96eaac629867af064c5b8681e3eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:59:42 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v945: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:43 compute-0 nice_chatterjee[260002]: --> passed data devices: 0 physical, 3 LVM
Dec 01 20:59:43 compute-0 nice_chatterjee[260002]: --> All data devices are unavailable
Dec 01 20:59:43 compute-0 systemd[1]: libpod-f618ba0d91e4eec5b02fe10a00c3ca4d82f96eaac629867af064c5b8681e3eb4.scope: Deactivated successfully.
Dec 01 20:59:43 compute-0 podman[259985]: 2025-12-01 20:59:43.389112667 +0000 UTC m=+0.762795311 container died f618ba0d91e4eec5b02fe10a00c3ca4d82f96eaac629867af064c5b8681e3eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatterjee, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 20:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e8921637b59b4d767b48525f3683383d0c1bced13d06fd989db85f1061e28e4-merged.mount: Deactivated successfully.
Dec 01 20:59:43 compute-0 podman[259985]: 2025-12-01 20:59:43.448114503 +0000 UTC m=+0.821797147 container remove f618ba0d91e4eec5b02fe10a00c3ca4d82f96eaac629867af064c5b8681e3eb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:59:43 compute-0 systemd[1]: libpod-conmon-f618ba0d91e4eec5b02fe10a00c3ca4d82f96eaac629867af064c5b8681e3eb4.scope: Deactivated successfully.
Dec 01 20:59:43 compute-0 sudo[259906]: pam_unix(sudo:session): session closed for user root
Dec 01 20:59:43 compute-0 sudo[260035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:59:43 compute-0 sudo[260035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:59:43 compute-0 sudo[260035]: pam_unix(sudo:session): session closed for user root
Dec 01 20:59:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:43 compute-0 sudo[260060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 20:59:43 compute-0 sudo[260060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:59:43 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:59:43.816 155855 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=84a1d907-d341-4608-b17a-1f738619ea16, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 01 20:59:43 compute-0 podman[260097]: 2025-12-01 20:59:43.963754372 +0000 UTC m=+0.070253658 container create fc839288946801c22d2fe36e46c8da5d483eb86824dc0a792f621ee974a9ac3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:59:44 compute-0 systemd[1]: Started libpod-conmon-fc839288946801c22d2fe36e46c8da5d483eb86824dc0a792f621ee974a9ac3d.scope.
Dec 01 20:59:44 compute-0 podman[260097]: 2025-12-01 20:59:43.936171429 +0000 UTC m=+0.042670795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:59:44 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:59:44 compute-0 podman[260097]: 2025-12-01 20:59:44.080122322 +0000 UTC m=+0.186621648 container init fc839288946801c22d2fe36e46c8da5d483eb86824dc0a792f621ee974a9ac3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:59:44 compute-0 podman[260097]: 2025-12-01 20:59:44.089229597 +0000 UTC m=+0.195728883 container start fc839288946801c22d2fe36e46c8da5d483eb86824dc0a792f621ee974a9ac3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:59:44 compute-0 podman[260097]: 2025-12-01 20:59:44.093401157 +0000 UTC m=+0.199900493 container attach fc839288946801c22d2fe36e46c8da5d483eb86824dc0a792f621ee974a9ac3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:59:44 compute-0 gifted_nash[260113]: 167 167
Dec 01 20:59:44 compute-0 systemd[1]: libpod-fc839288946801c22d2fe36e46c8da5d483eb86824dc0a792f621ee974a9ac3d.scope: Deactivated successfully.
Dec 01 20:59:44 compute-0 podman[260097]: 2025-12-01 20:59:44.096758372 +0000 UTC m=+0.203257678 container died fc839288946801c22d2fe36e46c8da5d483eb86824dc0a792f621ee974a9ac3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 20:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-986e931874b4ee96231e7e181dd432e622e64b532d7766889d4a5673b6926137-merged.mount: Deactivated successfully.
Dec 01 20:59:44 compute-0 podman[260097]: 2025-12-01 20:59:44.14431582 +0000 UTC m=+0.250815116 container remove fc839288946801c22d2fe36e46c8da5d483eb86824dc0a792f621ee974a9ac3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 01 20:59:44 compute-0 systemd[1]: libpod-conmon-fc839288946801c22d2fe36e46c8da5d483eb86824dc0a792f621ee974a9ac3d.scope: Deactivated successfully.
Dec 01 20:59:44 compute-0 podman[260136]: 2025-12-01 20:59:44.320398348 +0000 UTC m=+0.053721042 container create c6b31aa841c498d374660a089bd9f69df2c4d3b98ff5d0b7d775825a4dd8e1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 20:59:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:59:44.363 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 20:59:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:59:44.365 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 20:59:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 20:59:44.365 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 20:59:44 compute-0 systemd[1]: Started libpod-conmon-c6b31aa841c498d374660a089bd9f69df2c4d3b98ff5d0b7d775825a4dd8e1cf.scope.
Dec 01 20:59:44 compute-0 podman[260136]: 2025-12-01 20:59:44.290752521 +0000 UTC m=+0.024075255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:59:44 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5474d0c7e972d3d634c0e59fdda3d3123613e2213f234e5a1638118cc676734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5474d0c7e972d3d634c0e59fdda3d3123613e2213f234e5a1638118cc676734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5474d0c7e972d3d634c0e59fdda3d3123613e2213f234e5a1638118cc676734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5474d0c7e972d3d634c0e59fdda3d3123613e2213f234e5a1638118cc676734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:44 compute-0 podman[260136]: 2025-12-01 20:59:44.426721063 +0000 UTC m=+0.160043817 container init c6b31aa841c498d374660a089bd9f69df2c4d3b98ff5d0b7d775825a4dd8e1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 01 20:59:44 compute-0 podman[260136]: 2025-12-01 20:59:44.440248946 +0000 UTC m=+0.173571650 container start c6b31aa841c498d374660a089bd9f69df2c4d3b98ff5d0b7d775825a4dd8e1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 01 20:59:44 compute-0 podman[260136]: 2025-12-01 20:59:44.444996815 +0000 UTC m=+0.178319519 container attach c6b31aa841c498d374660a089bd9f69df2c4d3b98ff5d0b7d775825a4dd8e1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_thompson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 01 20:59:44 compute-0 ceph-mon[75880]: pgmap v945: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:44 compute-0 laughing_thompson[260152]: {
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:     "0": [
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:         {
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "devices": [
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "/dev/loop3"
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             ],
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_name": "ceph_lv0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_size": "21470642176",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "name": "ceph_lv0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "tags": {
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.cluster_name": "ceph",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.crush_device_class": "",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.encrypted": "0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.objectstore": "bluestore",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.osd_id": "0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.type": "block",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.vdo": "0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.with_tpm": "0"
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             },
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "type": "block",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "vg_name": "ceph_vg0"
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:         }
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:     ],
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:     "1": [
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:         {
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "devices": [
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "/dev/loop4"
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             ],
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_name": "ceph_lv1",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_size": "21470642176",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "name": "ceph_lv1",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "tags": {
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.cluster_name": "ceph",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.crush_device_class": "",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.encrypted": "0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.objectstore": "bluestore",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.osd_id": "1",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.type": "block",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.vdo": "0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.with_tpm": "0"
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             },
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "type": "block",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "vg_name": "ceph_vg1"
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:         }
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:     ],
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:     "2": [
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:         {
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "devices": [
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "/dev/loop5"
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             ],
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_name": "ceph_lv2",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_size": "21470642176",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "name": "ceph_lv2",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "tags": {
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.cluster_name": "ceph",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.crush_device_class": "",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.encrypted": "0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.objectstore": "bluestore",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.osd_id": "2",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.type": "block",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.vdo": "0",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:                 "ceph.with_tpm": "0"
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             },
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "type": "block",
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:             "vg_name": "ceph_vg2"
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:         }
Dec 01 20:59:44 compute-0 laughing_thompson[260152]:     ]
Dec 01 20:59:44 compute-0 laughing_thompson[260152]: }
Dec 01 20:59:44 compute-0 systemd[1]: libpod-c6b31aa841c498d374660a089bd9f69df2c4d3b98ff5d0b7d775825a4dd8e1cf.scope: Deactivated successfully.
Dec 01 20:59:44 compute-0 podman[260136]: 2025-12-01 20:59:44.801706743 +0000 UTC m=+0.535029427 container died c6b31aa841c498d374660a089bd9f69df2c4d3b98ff5d0b7d775825a4dd8e1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_thompson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 01 20:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5474d0c7e972d3d634c0e59fdda3d3123613e2213f234e5a1638118cc676734-merged.mount: Deactivated successfully.
Dec 01 20:59:44 compute-0 podman[260136]: 2025-12-01 20:59:44.861348829 +0000 UTC m=+0.594671513 container remove c6b31aa841c498d374660a089bd9f69df2c4d3b98ff5d0b7d775825a4dd8e1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_thompson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 20:59:44 compute-0 systemd[1]: libpod-conmon-c6b31aa841c498d374660a089bd9f69df2c4d3b98ff5d0b7d775825a4dd8e1cf.scope: Deactivated successfully.
Dec 01 20:59:44 compute-0 sudo[260060]: pam_unix(sudo:session): session closed for user root
Dec 01 20:59:44 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v946: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:45 compute-0 sudo[260173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 20:59:45 compute-0 sudo[260173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:59:45 compute-0 sudo[260173]: pam_unix(sudo:session): session closed for user root
Dec 01 20:59:45 compute-0 sudo[260198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 20:59:45 compute-0 sudo[260198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:59:45 compute-0 podman[260236]: 2025-12-01 20:59:45.357654453 +0000 UTC m=+0.045318038 container create 5dec234d536235ec7d8fbfbd04039cf5b7bb23b14ea12ba50e488776bdcb70ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shtern, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 20:59:45 compute-0 systemd[1]: Started libpod-conmon-5dec234d536235ec7d8fbfbd04039cf5b7bb23b14ea12ba50e488776bdcb70ca.scope.
Dec 01 20:59:45 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:59:45 compute-0 podman[260236]: 2025-12-01 20:59:45.337884575 +0000 UTC m=+0.025548160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:59:45 compute-0 podman[260236]: 2025-12-01 20:59:45.433105303 +0000 UTC m=+0.120768868 container init 5dec234d536235ec7d8fbfbd04039cf5b7bb23b14ea12ba50e488776bdcb70ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 01 20:59:45 compute-0 podman[260236]: 2025-12-01 20:59:45.441836487 +0000 UTC m=+0.129500062 container start 5dec234d536235ec7d8fbfbd04039cf5b7bb23b14ea12ba50e488776bdcb70ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 01 20:59:45 compute-0 podman[260236]: 2025-12-01 20:59:45.446251704 +0000 UTC m=+0.133915269 container attach 5dec234d536235ec7d8fbfbd04039cf5b7bb23b14ea12ba50e488776bdcb70ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 01 20:59:45 compute-0 elated_shtern[260252]: 167 167
Dec 01 20:59:45 compute-0 systemd[1]: libpod-5dec234d536235ec7d8fbfbd04039cf5b7bb23b14ea12ba50e488776bdcb70ca.scope: Deactivated successfully.
Dec 01 20:59:45 compute-0 podman[260236]: 2025-12-01 20:59:45.447877995 +0000 UTC m=+0.135541560 container died 5dec234d536235ec7d8fbfbd04039cf5b7bb23b14ea12ba50e488776bdcb70ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 01 20:59:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-de8bb603ed581243d7a98fb95de454848f5b22ec4dbada603ab6b5f729a5f50f-merged.mount: Deactivated successfully.
Dec 01 20:59:45 compute-0 podman[260236]: 2025-12-01 20:59:45.488801455 +0000 UTC m=+0.176465050 container remove 5dec234d536235ec7d8fbfbd04039cf5b7bb23b14ea12ba50e488776bdcb70ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 01 20:59:45 compute-0 systemd[1]: libpod-conmon-5dec234d536235ec7d8fbfbd04039cf5b7bb23b14ea12ba50e488776bdcb70ca.scope: Deactivated successfully.
Dec 01 20:59:45 compute-0 podman[260275]: 2025-12-01 20:59:45.669450606 +0000 UTC m=+0.037231975 container create e9021b213225a0f443002c546068ed7265ae6952b1db6bd673b814161c2fac50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 20:59:45 compute-0 systemd[1]: Started libpod-conmon-e9021b213225a0f443002c546068ed7265ae6952b1db6bd673b814161c2fac50.scope.
Dec 01 20:59:45 compute-0 systemd[1]: Started libcrun container.
Dec 01 20:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a16d6bec3fc22a37aa2aa1266dcbe25d60dc18ddebd0cc217c10bae51d8dfa41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a16d6bec3fc22a37aa2aa1266dcbe25d60dc18ddebd0cc217c10bae51d8dfa41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a16d6bec3fc22a37aa2aa1266dcbe25d60dc18ddebd0cc217c10bae51d8dfa41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a16d6bec3fc22a37aa2aa1266dcbe25d60dc18ddebd0cc217c10bae51d8dfa41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 20:59:45 compute-0 podman[260275]: 2025-12-01 20:59:45.741980024 +0000 UTC m=+0.109761473 container init e9021b213225a0f443002c546068ed7265ae6952b1db6bd673b814161c2fac50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 01 20:59:45 compute-0 podman[260275]: 2025-12-01 20:59:45.653614611 +0000 UTC m=+0.021396010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 20:59:45 compute-0 podman[260275]: 2025-12-01 20:59:45.756155738 +0000 UTC m=+0.123937107 container start e9021b213225a0f443002c546068ed7265ae6952b1db6bd673b814161c2fac50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 01 20:59:45 compute-0 podman[260275]: 2025-12-01 20:59:45.759563425 +0000 UTC m=+0.127344824 container attach e9021b213225a0f443002c546068ed7265ae6952b1db6bd673b814161c2fac50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 20:59:46 compute-0 lvm[260370]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 20:59:46 compute-0 lvm[260371]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 20:59:46 compute-0 lvm[260371]: VG ceph_vg1 finished
Dec 01 20:59:46 compute-0 lvm[260370]: VG ceph_vg0 finished
Dec 01 20:59:46 compute-0 lvm[260373]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:59:46 compute-0 lvm[260373]: VG ceph_vg2 finished
Dec 01 20:59:46 compute-0 ceph-mon[75880]: pgmap v946: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:46 compute-0 lvm[260375]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:59:46 compute-0 lvm[260375]: VG ceph_vg2 finished
Dec 01 20:59:46 compute-0 elegant_feynman[260292]: {}
Dec 01 20:59:46 compute-0 systemd[1]: libpod-e9021b213225a0f443002c546068ed7265ae6952b1db6bd673b814161c2fac50.scope: Deactivated successfully.
Dec 01 20:59:46 compute-0 lvm[260377]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 20:59:46 compute-0 lvm[260377]: VG ceph_vg2 finished
Dec 01 20:59:46 compute-0 podman[260275]: 2025-12-01 20:59:46.590503376 +0000 UTC m=+0.958284755 container died e9021b213225a0f443002c546068ed7265ae6952b1db6bd673b814161c2fac50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 01 20:59:46 compute-0 systemd[1]: libpod-e9021b213225a0f443002c546068ed7265ae6952b1db6bd673b814161c2fac50.scope: Consumed 1.364s CPU time.
Dec 01 20:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a16d6bec3fc22a37aa2aa1266dcbe25d60dc18ddebd0cc217c10bae51d8dfa41-merged.mount: Deactivated successfully.
Dec 01 20:59:46 compute-0 podman[260275]: 2025-12-01 20:59:46.649963516 +0000 UTC m=+1.017744935 container remove e9021b213225a0f443002c546068ed7265ae6952b1db6bd673b814161c2fac50 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_feynman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 20:59:46 compute-0 systemd[1]: libpod-conmon-e9021b213225a0f443002c546068ed7265ae6952b1db6bd673b814161c2fac50.scope: Deactivated successfully.
Dec 01 20:59:46 compute-0 sudo[260198]: pam_unix(sudo:session): session closed for user root
Dec 01 20:59:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 20:59:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:59:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 20:59:46 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:59:46 compute-0 sudo[260391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 20:59:46 compute-0 sudo[260391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 20:59:46 compute-0 sudo[260391]: pam_unix(sudo:session): session closed for user root
Dec 01 20:59:46 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v947: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:47 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:59:47 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 20:59:47 compute-0 ceph-mon[75880]: pgmap v947: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:48 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v948: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:50 compute-0 ceph-mon[75880]: pgmap v948: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:50 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v949: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:52 compute-0 ceph-mon[75880]: pgmap v949: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:52 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v950: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:54 compute-0 ceph-mon[75880]: pgmap v950: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:54 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v951: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:56 compute-0 ceph-mon[75880]: pgmap v951: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:56 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v952: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:57 compute-0 nova_compute[244568]: 2025-12-01 20:59:57.970 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 20:59:58 compute-0 ceph-mon[75880]: pgmap v952: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 20:59:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 20:59:58 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v953: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:00 compute-0 ceph-mon[75880]: pgmap v953: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:00 compute-0 nova_compute[244568]: 2025-12-01 21:00:00.956 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:00:00 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v954: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:02 compute-0 ceph-mon[75880]: pgmap v954: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 21:00:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2483239464' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 21:00:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 21:00:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2483239464' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 21:00:02 compute-0 nova_compute[244568]: 2025-12-01 21:00:02.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:00:02 compute-0 nova_compute[244568]: 2025-12-01 21:00:02.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 21:00:02 compute-0 nova_compute[244568]: 2025-12-01 21:00:02.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 21:00:02 compute-0 nova_compute[244568]: 2025-12-01 21:00:02.972 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 21:00:02 compute-0 nova_compute[244568]: 2025-12-01 21:00:02.972 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:00:02 compute-0 nova_compute[244568]: 2025-12-01 21:00:02.972 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 21:00:02 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v955: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:03 compute-0 podman[260416]: 2025-12-01 21:00:03.097878057 +0000 UTC m=+0.056246601 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 01 21:00:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:00:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:00:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:00:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:00:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:00:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:00:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/2483239464' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 21:00:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/2483239464' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 21:00:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:03 compute-0 nova_compute[244568]: 2025-12-01 21:00:03.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:00:03 compute-0 nova_compute[244568]: 2025-12-01 21:00:03.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:00:04 compute-0 ceph-mon[75880]: pgmap v955: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:04 compute-0 nova_compute[244568]: 2025-12-01 21:00:04.956 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:00:04 compute-0 nova_compute[244568]: 2025-12-01 21:00:04.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:00:04 compute-0 nova_compute[244568]: 2025-12-01 21:00:04.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:00:04 compute-0 nova_compute[244568]: 2025-12-01 21:00:04.985 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:00:04 compute-0 nova_compute[244568]: 2025-12-01 21:00:04.985 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:00:04 compute-0 nova_compute[244568]: 2025-12-01 21:00:04.985 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:00:04 compute-0 nova_compute[244568]: 2025-12-01 21:00:04.985 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 21:00:04 compute-0 nova_compute[244568]: 2025-12-01 21:00:04.986 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 21:00:04 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v956: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:05 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 21:00:05 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857790371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:00:05 compute-0 nova_compute[244568]: 2025-12-01 21:00:05.523 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 21:00:05 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1857790371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:00:05 compute-0 nova_compute[244568]: 2025-12-01 21:00:05.711 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 21:00:05 compute-0 nova_compute[244568]: 2025-12-01 21:00:05.712 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5127MB free_disk=59.988265527412295GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 21:00:05 compute-0 nova_compute[244568]: 2025-12-01 21:00:05.712 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:00:05 compute-0 nova_compute[244568]: 2025-12-01 21:00:05.713 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:00:05 compute-0 nova_compute[244568]: 2025-12-01 21:00:05.808 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 21:00:05 compute-0 nova_compute[244568]: 2025-12-01 21:00:05.809 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 21:00:05 compute-0 nova_compute[244568]: 2025-12-01 21:00:05.829 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 21:00:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 21:00:06 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22242369' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:00:06 compute-0 nova_compute[244568]: 2025-12-01 21:00:06.367 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 21:00:06 compute-0 nova_compute[244568]: 2025-12-01 21:00:06.374 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 21:00:06 compute-0 nova_compute[244568]: 2025-12-01 21:00:06.401 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 21:00:06 compute-0 nova_compute[244568]: 2025-12-01 21:00:06.402 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 21:00:06 compute-0 nova_compute[244568]: 2025-12-01 21:00:06.402 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:00:06 compute-0 ceph-mon[75880]: pgmap v956: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:06 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/22242369' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:00:06 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v957: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:07 compute-0 podman[260480]: 2025-12-01 21:00:07.117740097 +0000 UTC m=+0.067657777 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 01 21:00:07 compute-0 podman[260481]: 2025-12-01 21:00:07.156001994 +0000 UTC m=+0.109926209 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 21:00:08 compute-0 ceph-mon[75880]: pgmap v957: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:08 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v958: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:09 compute-0 ceph-mon[75880]: pgmap v958: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:10 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v959: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:12 compute-0 ceph-mon[75880]: pgmap v959: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:13 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v960: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:14 compute-0 ceph-mon[75880]: pgmap v960: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:15 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v961: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:16 compute-0 ceph-mon[75880]: pgmap v961: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:17 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v962: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:18 compute-0 ceph-mon[75880]: pgmap v962: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:19 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v963: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:20 compute-0 ceph-mon[75880]: pgmap v963: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:21 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v964: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:22 compute-0 ceph-mon[75880]: pgmap v964: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:23 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v965: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:24 compute-0 ceph-mon[75880]: pgmap v965: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:25 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v966: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:26 compute-0 ceph-mon[75880]: pgmap v966: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:27 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v967: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:28 compute-0 ceph-mon[75880]: pgmap v967: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:29 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v968: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:30 compute-0 ceph-mon[75880]: pgmap v968: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:31 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v969: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:32 compute-0 ceph-mon[75880]: pgmap v969: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_21:00:32
Dec 01 21:00:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 21:00:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 21:00:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.data', 'images', 'backups']
Dec 01 21:00:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v970: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 21:00:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 21:00:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:34 compute-0 podman[260527]: 2025-12-01 21:00:34.122209527 +0000 UTC m=+0.076390551 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 21:00:34 compute-0 ceph-mon[75880]: pgmap v970: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:35 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v971: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:36 compute-0 ceph-mon[75880]: pgmap v971: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:37 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v972: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:38 compute-0 podman[260549]: 2025-12-01 21:00:38.165949174 +0000 UTC m=+0.112122448 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 01 21:00:38 compute-0 podman[260550]: 2025-12-01 21:00:38.180227011 +0000 UTC m=+0.120270123 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:00:38 compute-0 ceph-mon[75880]: pgmap v972: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:39 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v973: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:40 compute-0 ceph-mon[75880]: pgmap v973: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:00:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 21:00:41 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v974: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:42 compute-0 ceph-mon[75880]: pgmap v974: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:43 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v975: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:44 compute-0 ceph-mon[75880]: pgmap v975: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:00:44.364 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:00:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:00:44.364 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:00:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:00:44.365 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:00:45 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v976: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:46 compute-0 ceph-mon[75880]: pgmap v976: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:46 compute-0 sudo[260594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:00:46 compute-0 sudo[260594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:00:46 compute-0 sudo[260594]: pam_unix(sudo:session): session closed for user root
Dec 01 21:00:46 compute-0 sudo[260619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 21:00:46 compute-0 sudo[260619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:00:47 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v977: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:47 compute-0 sudo[260619]: pam_unix(sudo:session): session closed for user root
Dec 01 21:00:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 21:00:47 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:00:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 21:00:47 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 21:00:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 21:00:47 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:00:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 21:00:47 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 21:00:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 21:00:47 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 21:00:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 21:00:47 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:00:47 compute-0 sudo[260675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:00:47 compute-0 sudo[260675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:00:47 compute-0 sudo[260675]: pam_unix(sudo:session): session closed for user root
Dec 01 21:00:47 compute-0 sudo[260700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 21:00:47 compute-0 sudo[260700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:00:47 compute-0 podman[260737]: 2025-12-01 21:00:47.989446483 +0000 UTC m=+0.046686422 container create 0261adfa26dc271b3a19e5c792579eda757cbabd8fbe16b1d5e9ea7d43ca8600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rhodes, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 01 21:00:48 compute-0 systemd[1]: Started libpod-conmon-0261adfa26dc271b3a19e5c792579eda757cbabd8fbe16b1d5e9ea7d43ca8600.scope.
Dec 01 21:00:48 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:00:48 compute-0 podman[260737]: 2025-12-01 21:00:47.971269924 +0000 UTC m=+0.028509873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:00:48 compute-0 podman[260737]: 2025-12-01 21:00:48.07824043 +0000 UTC m=+0.135480409 container init 0261adfa26dc271b3a19e5c792579eda757cbabd8fbe16b1d5e9ea7d43ca8600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 21:00:48 compute-0 podman[260737]: 2025-12-01 21:00:48.088458479 +0000 UTC m=+0.145698418 container start 0261adfa26dc271b3a19e5c792579eda757cbabd8fbe16b1d5e9ea7d43ca8600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rhodes, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 21:00:48 compute-0 podman[260737]: 2025-12-01 21:00:48.092760494 +0000 UTC m=+0.150000613 container attach 0261adfa26dc271b3a19e5c792579eda757cbabd8fbe16b1d5e9ea7d43ca8600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rhodes, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 21:00:48 compute-0 systemd[1]: libpod-0261adfa26dc271b3a19e5c792579eda757cbabd8fbe16b1d5e9ea7d43ca8600.scope: Deactivated successfully.
Dec 01 21:00:48 compute-0 cranky_rhodes[260753]: 167 167
Dec 01 21:00:48 compute-0 conmon[260753]: conmon 0261adfa26dc271b3a19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0261adfa26dc271b3a19e5c792579eda757cbabd8fbe16b1d5e9ea7d43ca8600.scope/container/memory.events
Dec 01 21:00:48 compute-0 podman[260737]: 2025-12-01 21:00:48.099479794 +0000 UTC m=+0.156719733 container died 0261adfa26dc271b3a19e5c792579eda757cbabd8fbe16b1d5e9ea7d43ca8600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:00:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a33766efd2b8d0c493e9525a399a7ea31309412426eedd4bb58dfc048f7fa52-merged.mount: Deactivated successfully.
Dec 01 21:00:48 compute-0 podman[260737]: 2025-12-01 21:00:48.160122311 +0000 UTC m=+0.217362250 container remove 0261adfa26dc271b3a19e5c792579eda757cbabd8fbe16b1d5e9ea7d43ca8600 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_rhodes, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 21:00:48 compute-0 systemd[1]: libpod-conmon-0261adfa26dc271b3a19e5c792579eda757cbabd8fbe16b1d5e9ea7d43ca8600.scope: Deactivated successfully.
Dec 01 21:00:48 compute-0 ceph-mon[75880]: pgmap v977: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:48 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:00:48 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 21:00:48 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:00:48 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 21:00:48 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 21:00:48 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:00:48 compute-0 podman[260777]: 2025-12-01 21:00:48.370913075 +0000 UTC m=+0.073653925 container create 09a3a2d3bbd7dfe3be266c0e1fc1423431cff344f064254b89ebb084b5c5c130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 01 21:00:48 compute-0 systemd[1]: Started libpod-conmon-09a3a2d3bbd7dfe3be266c0e1fc1423431cff344f064254b89ebb084b5c5c130.scope.
Dec 01 21:00:48 compute-0 podman[260777]: 2025-12-01 21:00:48.344135777 +0000 UTC m=+0.046876697 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:00:48 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e1e63b1188b2be06f3455890dc6de4495ab6b6095f56e6670530136c7f735e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e1e63b1188b2be06f3455890dc6de4495ab6b6095f56e6670530136c7f735e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e1e63b1188b2be06f3455890dc6de4495ab6b6095f56e6670530136c7f735e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e1e63b1188b2be06f3455890dc6de4495ab6b6095f56e6670530136c7f735e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e1e63b1188b2be06f3455890dc6de4495ab6b6095f56e6670530136c7f735e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:48 compute-0 podman[260777]: 2025-12-01 21:00:48.472319986 +0000 UTC m=+0.175060856 container init 09a3a2d3bbd7dfe3be266c0e1fc1423431cff344f064254b89ebb084b5c5c130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 01 21:00:48 compute-0 podman[260777]: 2025-12-01 21:00:48.482435023 +0000 UTC m=+0.185175863 container start 09a3a2d3bbd7dfe3be266c0e1fc1423431cff344f064254b89ebb084b5c5c130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hawking, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 21:00:48 compute-0 podman[260777]: 2025-12-01 21:00:48.489736261 +0000 UTC m=+0.192477131 container attach 09a3a2d3bbd7dfe3be266c0e1fc1423431cff344f064254b89ebb084b5c5c130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 01 21:00:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:49 compute-0 suspicious_hawking[260794]: --> passed data devices: 0 physical, 3 LVM
Dec 01 21:00:49 compute-0 suspicious_hawking[260794]: --> All data devices are unavailable
Dec 01 21:00:49 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v978: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:49 compute-0 systemd[1]: libpod-09a3a2d3bbd7dfe3be266c0e1fc1423431cff344f064254b89ebb084b5c5c130.scope: Deactivated successfully.
Dec 01 21:00:49 compute-0 podman[260777]: 2025-12-01 21:00:49.036171494 +0000 UTC m=+0.738912354 container died 09a3a2d3bbd7dfe3be266c0e1fc1423431cff344f064254b89ebb084b5c5c130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hawking, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 21:00:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9e1e63b1188b2be06f3455890dc6de4495ab6b6095f56e6670530136c7f735e-merged.mount: Deactivated successfully.
Dec 01 21:00:49 compute-0 podman[260777]: 2025-12-01 21:00:49.08145206 +0000 UTC m=+0.784192900 container remove 09a3a2d3bbd7dfe3be266c0e1fc1423431cff344f064254b89ebb084b5c5c130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hawking, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 01 21:00:49 compute-0 systemd[1]: libpod-conmon-09a3a2d3bbd7dfe3be266c0e1fc1423431cff344f064254b89ebb084b5c5c130.scope: Deactivated successfully.
Dec 01 21:00:49 compute-0 sudo[260700]: pam_unix(sudo:session): session closed for user root
Dec 01 21:00:49 compute-0 sudo[260827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:00:49 compute-0 sudo[260827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:00:49 compute-0 sudo[260827]: pam_unix(sudo:session): session closed for user root
Dec 01 21:00:49 compute-0 sudo[260852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 21:00:49 compute-0 sudo[260852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:00:49 compute-0 podman[260889]: 2025-12-01 21:00:49.544852345 +0000 UTC m=+0.046384981 container create 7601982328b43709b1456d4e0976a40edd95076cd724f4faa9b197be5fabbb5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 21:00:49 compute-0 systemd[1]: Started libpod-conmon-7601982328b43709b1456d4e0976a40edd95076cd724f4faa9b197be5fabbb5d.scope.
Dec 01 21:00:49 compute-0 podman[260889]: 2025-12-01 21:00:49.521275428 +0000 UTC m=+0.022808104 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:00:49 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:00:49 compute-0 podman[260889]: 2025-12-01 21:00:49.636361068 +0000 UTC m=+0.137893714 container init 7601982328b43709b1456d4e0976a40edd95076cd724f4faa9b197be5fabbb5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 21:00:49 compute-0 podman[260889]: 2025-12-01 21:00:49.644347518 +0000 UTC m=+0.145880154 container start 7601982328b43709b1456d4e0976a40edd95076cd724f4faa9b197be5fabbb5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 21:00:49 compute-0 podman[260889]: 2025-12-01 21:00:49.647886888 +0000 UTC m=+0.149419544 container attach 7601982328b43709b1456d4e0976a40edd95076cd724f4faa9b197be5fabbb5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 01 21:00:49 compute-0 systemd[1]: libpod-7601982328b43709b1456d4e0976a40edd95076cd724f4faa9b197be5fabbb5d.scope: Deactivated successfully.
Dec 01 21:00:49 compute-0 mystifying_morse[260905]: 167 167
Dec 01 21:00:49 compute-0 podman[260889]: 2025-12-01 21:00:49.650330445 +0000 UTC m=+0.151863101 container died 7601982328b43709b1456d4e0976a40edd95076cd724f4faa9b197be5fabbb5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 21:00:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5966c3e97a85bc87e7f96828240040da134a42c57d2b69c760ff9cd17f63784-merged.mount: Deactivated successfully.
Dec 01 21:00:49 compute-0 podman[260889]: 2025-12-01 21:00:49.699576215 +0000 UTC m=+0.201108841 container remove 7601982328b43709b1456d4e0976a40edd95076cd724f4faa9b197be5fabbb5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 01 21:00:49 compute-0 systemd[1]: libpod-conmon-7601982328b43709b1456d4e0976a40edd95076cd724f4faa9b197be5fabbb5d.scope: Deactivated successfully.
Dec 01 21:00:49 compute-0 podman[260929]: 2025-12-01 21:00:49.887135992 +0000 UTC m=+0.051919335 container create 0bcb8f5732582b673c222fabc1ffe0eb310b57adf0a750ca48e0479892781af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 01 21:00:49 compute-0 systemd[1]: Started libpod-conmon-0bcb8f5732582b673c222fabc1ffe0eb310b57adf0a750ca48e0479892781af5.scope.
Dec 01 21:00:49 compute-0 podman[260929]: 2025-12-01 21:00:49.861907093 +0000 UTC m=+0.026690516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:00:49 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed0a1a7f93cb0add70341d04a8f26a8e402a7d897cfbea13a60d1d57d352edac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed0a1a7f93cb0add70341d04a8f26a8e402a7d897cfbea13a60d1d57d352edac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed0a1a7f93cb0add70341d04a8f26a8e402a7d897cfbea13a60d1d57d352edac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed0a1a7f93cb0add70341d04a8f26a8e402a7d897cfbea13a60d1d57d352edac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:49 compute-0 podman[260929]: 2025-12-01 21:00:49.981546375 +0000 UTC m=+0.146329758 container init 0bcb8f5732582b673c222fabc1ffe0eb310b57adf0a750ca48e0479892781af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 01 21:00:49 compute-0 podman[260929]: 2025-12-01 21:00:49.995535493 +0000 UTC m=+0.160318866 container start 0bcb8f5732582b673c222fabc1ffe0eb310b57adf0a750ca48e0479892781af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:00:50 compute-0 podman[260929]: 2025-12-01 21:00:50.001008004 +0000 UTC m=+0.165791367 container attach 0bcb8f5732582b673c222fabc1ffe0eb310b57adf0a750ca48e0479892781af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 21:00:50 compute-0 ceph-mon[75880]: pgmap v978: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:50 compute-0 reverent_haslett[260944]: {
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:     "0": [
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:         {
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "devices": [
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "/dev/loop3"
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             ],
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_name": "ceph_lv0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_size": "21470642176",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "name": "ceph_lv0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "tags": {
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.cluster_name": "ceph",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.crush_device_class": "",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.encrypted": "0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.objectstore": "bluestore",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.osd_id": "0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.type": "block",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.vdo": "0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.with_tpm": "0"
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             },
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "type": "block",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "vg_name": "ceph_vg0"
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:         }
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:     ],
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:     "1": [
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:         {
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "devices": [
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "/dev/loop4"
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             ],
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_name": "ceph_lv1",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_size": "21470642176",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "name": "ceph_lv1",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "tags": {
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.cluster_name": "ceph",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.crush_device_class": "",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.encrypted": "0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.objectstore": "bluestore",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.osd_id": "1",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.type": "block",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.vdo": "0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.with_tpm": "0"
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             },
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "type": "block",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "vg_name": "ceph_vg1"
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:         }
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:     ],
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:     "2": [
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:         {
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "devices": [
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "/dev/loop5"
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             ],
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_name": "ceph_lv2",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_size": "21470642176",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "name": "ceph_lv2",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "tags": {
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.cluster_name": "ceph",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.crush_device_class": "",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.encrypted": "0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.objectstore": "bluestore",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.osd_id": "2",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.type": "block",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.vdo": "0",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:                 "ceph.with_tpm": "0"
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             },
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "type": "block",
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:             "vg_name": "ceph_vg2"
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:         }
Dec 01 21:00:50 compute-0 reverent_haslett[260944]:     ]
Dec 01 21:00:50 compute-0 reverent_haslett[260944]: }
Dec 01 21:00:50 compute-0 systemd[1]: libpod-0bcb8f5732582b673c222fabc1ffe0eb310b57adf0a750ca48e0479892781af5.scope: Deactivated successfully.
Dec 01 21:00:50 compute-0 podman[260929]: 2025-12-01 21:00:50.339348447 +0000 UTC m=+0.504131850 container died 0bcb8f5732582b673c222fabc1ffe0eb310b57adf0a750ca48e0479892781af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 01 21:00:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed0a1a7f93cb0add70341d04a8f26a8e402a7d897cfbea13a60d1d57d352edac-merged.mount: Deactivated successfully.
Dec 01 21:00:50 compute-0 podman[260929]: 2025-12-01 21:00:50.39284684 +0000 UTC m=+0.557630173 container remove 0bcb8f5732582b673c222fabc1ffe0eb310b57adf0a750ca48e0479892781af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 21:00:50 compute-0 systemd[1]: libpod-conmon-0bcb8f5732582b673c222fabc1ffe0eb310b57adf0a750ca48e0479892781af5.scope: Deactivated successfully.
Dec 01 21:00:50 compute-0 sudo[260852]: pam_unix(sudo:session): session closed for user root
Dec 01 21:00:50 compute-0 sudo[260967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:00:50 compute-0 sudo[260967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:00:50 compute-0 sudo[260967]: pam_unix(sudo:session): session closed for user root
Dec 01 21:00:50 compute-0 sudo[260992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 21:00:50 compute-0 sudo[260992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:00:50 compute-0 podman[261029]: 2025-12-01 21:00:50.990120633 +0000 UTC m=+0.051042557 container create e801e129ae91cbf50f14e465d2f95cd7b66eed1a2fd1a7c6acd7e3f37f9a4df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_faraday, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 01 21:00:51 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v979: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:51 compute-0 systemd[1]: Started libpod-conmon-e801e129ae91cbf50f14e465d2f95cd7b66eed1a2fd1a7c6acd7e3f37f9a4df0.scope.
Dec 01 21:00:51 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:00:51 compute-0 podman[261029]: 2025-12-01 21:00:50.972724109 +0000 UTC m=+0.033646043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:00:51 compute-0 podman[261029]: 2025-12-01 21:00:51.068699531 +0000 UTC m=+0.129621476 container init e801e129ae91cbf50f14e465d2f95cd7b66eed1a2fd1a7c6acd7e3f37f9a4df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_faraday, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec 01 21:00:51 compute-0 podman[261029]: 2025-12-01 21:00:51.078955482 +0000 UTC m=+0.139877406 container start e801e129ae91cbf50f14e465d2f95cd7b66eed1a2fd1a7c6acd7e3f37f9a4df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_faraday, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 21:00:51 compute-0 podman[261029]: 2025-12-01 21:00:51.082017728 +0000 UTC m=+0.142939682 container attach e801e129ae91cbf50f14e465d2f95cd7b66eed1a2fd1a7c6acd7e3f37f9a4df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_faraday, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 01 21:00:51 compute-0 festive_faraday[261045]: 167 167
Dec 01 21:00:51 compute-0 systemd[1]: libpod-e801e129ae91cbf50f14e465d2f95cd7b66eed1a2fd1a7c6acd7e3f37f9a4df0.scope: Deactivated successfully.
Dec 01 21:00:51 compute-0 podman[261029]: 2025-12-01 21:00:51.086106756 +0000 UTC m=+0.147028680 container died e801e129ae91cbf50f14e465d2f95cd7b66eed1a2fd1a7c6acd7e3f37f9a4df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_faraday, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 21:00:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb9ea83277c146c7a19ebe06c4c4fce2dea861baf7dd3ab679f6425f081532b7-merged.mount: Deactivated successfully.
Dec 01 21:00:51 compute-0 podman[261029]: 2025-12-01 21:00:51.123424003 +0000 UTC m=+0.184345927 container remove e801e129ae91cbf50f14e465d2f95cd7b66eed1a2fd1a7c6acd7e3f37f9a4df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_faraday, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 01 21:00:51 compute-0 systemd[1]: libpod-conmon-e801e129ae91cbf50f14e465d2f95cd7b66eed1a2fd1a7c6acd7e3f37f9a4df0.scope: Deactivated successfully.
Dec 01 21:00:51 compute-0 podman[261069]: 2025-12-01 21:00:51.31321552 +0000 UTC m=+0.051992548 container create 4e75c1be7bc4ebaafb7fbc34d72e5cf293acda9ba17396cc054b748ee6213427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 01 21:00:51 compute-0 systemd[1]: Started libpod-conmon-4e75c1be7bc4ebaafb7fbc34d72e5cf293acda9ba17396cc054b748ee6213427.scope.
Dec 01 21:00:51 compute-0 podman[261069]: 2025-12-01 21:00:51.292713959 +0000 UTC m=+0.031491027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:00:51 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a3a5ba3c56d157686ff1499887426150d5a5cd20bcd20177bdbe3bf6dc9fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a3a5ba3c56d157686ff1499887426150d5a5cd20bcd20177bdbe3bf6dc9fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a3a5ba3c56d157686ff1499887426150d5a5cd20bcd20177bdbe3bf6dc9fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a3a5ba3c56d157686ff1499887426150d5a5cd20bcd20177bdbe3bf6dc9fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 21:00:51 compute-0 podman[261069]: 2025-12-01 21:00:51.421661052 +0000 UTC m=+0.160438110 container init 4e75c1be7bc4ebaafb7fbc34d72e5cf293acda9ba17396cc054b748ee6213427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 21:00:51 compute-0 podman[261069]: 2025-12-01 21:00:51.429752345 +0000 UTC m=+0.168529353 container start 4e75c1be7bc4ebaafb7fbc34d72e5cf293acda9ba17396cc054b748ee6213427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 21:00:51 compute-0 podman[261069]: 2025-12-01 21:00:51.434133202 +0000 UTC m=+0.172910230 container attach 4e75c1be7bc4ebaafb7fbc34d72e5cf293acda9ba17396cc054b748ee6213427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 01 21:00:52 compute-0 lvm[261164]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 21:00:52 compute-0 lvm[261165]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 21:00:52 compute-0 lvm[261164]: VG ceph_vg0 finished
Dec 01 21:00:52 compute-0 lvm[261165]: VG ceph_vg1 finished
Dec 01 21:00:52 compute-0 lvm[261167]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 21:00:52 compute-0 lvm[261167]: VG ceph_vg2 finished
Dec 01 21:00:52 compute-0 ceph-mon[75880]: pgmap v979: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:52 compute-0 fervent_jackson[261086]: {}
Dec 01 21:00:52 compute-0 systemd[1]: libpod-4e75c1be7bc4ebaafb7fbc34d72e5cf293acda9ba17396cc054b748ee6213427.scope: Deactivated successfully.
Dec 01 21:00:52 compute-0 systemd[1]: libpod-4e75c1be7bc4ebaafb7fbc34d72e5cf293acda9ba17396cc054b748ee6213427.scope: Consumed 1.545s CPU time.
Dec 01 21:00:52 compute-0 podman[261069]: 2025-12-01 21:00:52.337030965 +0000 UTC m=+1.075807983 container died 4e75c1be7bc4ebaafb7fbc34d72e5cf293acda9ba17396cc054b748ee6213427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 21:00:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-be5a3a5ba3c56d157686ff1499887426150d5a5cd20bcd20177bdbe3bf6dc9fc-merged.mount: Deactivated successfully.
Dec 01 21:00:52 compute-0 podman[261069]: 2025-12-01 21:00:52.389278559 +0000 UTC m=+1.128055597 container remove 4e75c1be7bc4ebaafb7fbc34d72e5cf293acda9ba17396cc054b748ee6213427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 21:00:52 compute-0 systemd[1]: libpod-conmon-4e75c1be7bc4ebaafb7fbc34d72e5cf293acda9ba17396cc054b748ee6213427.scope: Deactivated successfully.
Dec 01 21:00:52 compute-0 sudo[260992]: pam_unix(sudo:session): session closed for user root
Dec 01 21:00:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 21:00:52 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:00:52 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 21:00:52 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:00:52 compute-0 sudo[261180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 21:00:52 compute-0 sudo[261180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:00:52 compute-0 sudo[261180]: pam_unix(sudo:session): session closed for user root
Dec 01 21:00:53 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v980: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:53 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:00:53 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:00:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:54 compute-0 ceph-mon[75880]: pgmap v980: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:55 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v981: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:56 compute-0 ceph-mon[75880]: pgmap v981: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:57 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v982: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:58 compute-0 ceph-mon[75880]: pgmap v982: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:00:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:00:59 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v983: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:00 compute-0 ceph-mon[75880]: pgmap v983: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:01 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v984: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:01 compute-0 CROND[261206]: (root) CMD (run-parts /etc/cron.hourly)
Dec 01 21:01:01 compute-0 run-parts[261209]: (/etc/cron.hourly) starting 0anacron
Dec 01 21:01:01 compute-0 nova_compute[244568]: 2025-12-01 21:01:01.399 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:01:01 compute-0 run-parts[261215]: (/etc/cron.hourly) finished 0anacron
Dec 01 21:01:01 compute-0 CROND[261205]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 01 21:01:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 21:01:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3717159900' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 21:01:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 21:01:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3717159900' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 21:01:02 compute-0 ceph-mon[75880]: pgmap v984: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:02 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3717159900' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 21:01:02 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3717159900' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 21:01:02 compute-0 nova_compute[244568]: 2025-12-01 21:01:02.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:01:02 compute-0 nova_compute[244568]: 2025-12-01 21:01:02.958 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 21:01:02 compute-0 nova_compute[244568]: 2025-12-01 21:01:02.959 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 21:01:02 compute-0 nova_compute[244568]: 2025-12-01 21:01:02.974 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 21:01:02 compute-0 nova_compute[244568]: 2025-12-01 21:01:02.974 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:01:02 compute-0 nova_compute[244568]: 2025-12-01 21:01:02.975 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:01:02 compute-0 nova_compute[244568]: 2025-12-01 21:01:02.975 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 21:01:03 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v985: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:01:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:01:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:01:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:01:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:01:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:01:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:03 compute-0 nova_compute[244568]: 2025-12-01 21:01:03.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:01:04 compute-0 ceph-mon[75880]: pgmap v985: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:04 compute-0 nova_compute[244568]: 2025-12-01 21:01:04.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:01:05 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v986: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:05 compute-0 podman[261216]: 2025-12-01 21:01:05.113038727 +0000 UTC m=+0.069636220 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 01 21:01:06 compute-0 ceph-mon[75880]: pgmap v986: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:06 compute-0 nova_compute[244568]: 2025-12-01 21:01:06.956 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:01:06 compute-0 nova_compute[244568]: 2025-12-01 21:01:06.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:01:06 compute-0 nova_compute[244568]: 2025-12-01 21:01:06.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:01:06 compute-0 nova_compute[244568]: 2025-12-01 21:01:06.984 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:01:06 compute-0 nova_compute[244568]: 2025-12-01 21:01:06.985 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:01:06 compute-0 nova_compute[244568]: 2025-12-01 21:01:06.985 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:01:06 compute-0 nova_compute[244568]: 2025-12-01 21:01:06.985 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 21:01:06 compute-0 nova_compute[244568]: 2025-12-01 21:01:06.985 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 21:01:07 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v987: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 21:01:07 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2603100763' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:01:07 compute-0 nova_compute[244568]: 2025-12-01 21:01:07.514 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 21:01:07 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2603100763' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:01:07 compute-0 nova_compute[244568]: 2025-12-01 21:01:07.667 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 21:01:07 compute-0 nova_compute[244568]: 2025-12-01 21:01:07.668 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5097MB free_disk=59.988265527412295GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 21:01:07 compute-0 nova_compute[244568]: 2025-12-01 21:01:07.668 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:01:07 compute-0 nova_compute[244568]: 2025-12-01 21:01:07.668 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:01:07 compute-0 nova_compute[244568]: 2025-12-01 21:01:07.722 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 21:01:07 compute-0 nova_compute[244568]: 2025-12-01 21:01:07.722 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 21:01:07 compute-0 nova_compute[244568]: 2025-12-01 21:01:07.740 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 21:01:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 21:01:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2658689197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:01:08 compute-0 nova_compute[244568]: 2025-12-01 21:01:08.277 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 21:01:08 compute-0 nova_compute[244568]: 2025-12-01 21:01:08.284 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 21:01:08 compute-0 nova_compute[244568]: 2025-12-01 21:01:08.352 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 21:01:08 compute-0 nova_compute[244568]: 2025-12-01 21:01:08.354 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 21:01:08 compute-0 nova_compute[244568]: 2025-12-01 21:01:08.354 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:01:08 compute-0 ceph-mon[75880]: pgmap v987: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:08 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2658689197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:01:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:09 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v988: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:09 compute-0 podman[261279]: 2025-12-01 21:01:09.129679928 +0000 UTC m=+0.080982525 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 01 21:01:09 compute-0 podman[261280]: 2025-12-01 21:01:09.218727103 +0000 UTC m=+0.165917291 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 01 21:01:10 compute-0 nova_compute[244568]: 2025-12-01 21:01:10.351 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:01:10 compute-0 ceph-mon[75880]: pgmap v988: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:11 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v989: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:12 compute-0 ceph-mon[75880]: pgmap v989: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:13 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v990: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:13 compute-0 ceph-mon[75880]: pgmap v990: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:14 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:01:14 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4600 writes, 20K keys, 4600 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4600 writes, 4600 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1495 writes, 6784 keys, 1495 commit groups, 1.0 writes per commit group, ingest: 6.49 MB, 0.01 MB/s
                                           Interval WAL: 1495 writes, 1495 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     89.3      0.18              0.05        11    0.016       0      0       0.0       0.0
                                             L6      1/0    5.39 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.2    109.4     90.4      0.57              0.17        10    0.057     38K   5295       0.0       0.0
                                            Sum      1/0    5.39 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.2     83.0     90.2      0.75              0.22        21    0.036     38K   5295       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.5    117.0    119.7      0.27              0.11        10    0.027     21K   3026       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    109.4     90.4      0.57              0.17        10    0.057     38K   5295       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     90.9      0.18              0.05        10    0.018       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.04 MB/s write, 0.06 GB read, 0.03 MB/s read, 0.7 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a3cf2218d0#2 capacity: 308.00 MB usage: 5.81 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(548,5.48 MB,1.78019%) FilterBlock(22,114.48 KB,0.0362991%) IndexBlock(22,220.48 KB,0.069908%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 01 21:01:15 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v991: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:16 compute-0 ceph-mon[75880]: pgmap v991: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:17 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v992: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:18 compute-0 ceph-mon[75880]: pgmap v992: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:19 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v993: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:20 compute-0 ceph-mon[75880]: pgmap v993: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.117129) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622880117343, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1395, "num_deletes": 251, "total_data_size": 1457661, "memory_usage": 1484832, "flush_reason": "Manual Compaction"}
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622880132019, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1426817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19264, "largest_seqno": 20658, "table_properties": {"data_size": 1420353, "index_size": 3667, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13566, "raw_average_key_size": 19, "raw_value_size": 1407287, "raw_average_value_size": 2051, "num_data_blocks": 168, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764622734, "oldest_key_time": 1764622734, "file_creation_time": 1764622880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 14840 microseconds, and 8589 cpu microseconds.
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.132084) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1426817 bytes OK
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.132113) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.134004) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.134024) EVENT_LOG_v1 {"time_micros": 1764622880134017, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.134046) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1451485, prev total WAL file size 1451485, number of live WAL files 2.
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.135000) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1393KB)], [47(5523KB)]
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622880135048, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 7082482, "oldest_snapshot_seqno": -1}
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4064 keys, 5845845 bytes, temperature: kUnknown
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622880175534, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 5845845, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5816775, "index_size": 17824, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 98125, "raw_average_key_size": 24, "raw_value_size": 5741819, "raw_average_value_size": 1412, "num_data_blocks": 756, "num_entries": 4064, "num_filter_entries": 4064, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764621072, "oldest_key_time": 0, "file_creation_time": 1764622880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9d350523-84d7-4671-b42c-85f993c10d4b", "db_session_id": "KWBZEPI2BH59SPE6E2I4", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.175854) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 5845845 bytes
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.177523) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.5 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 5.4 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(9.1) write-amplify(4.1) OK, records in: 4578, records dropped: 514 output_compression: NoCompression
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.177551) EVENT_LOG_v1 {"time_micros": 1764622880177538, "job": 24, "event": "compaction_finished", "compaction_time_micros": 40582, "compaction_time_cpu_micros": 18542, "output_level": 6, "num_output_files": 1, "total_output_size": 5845845, "num_input_records": 4578, "num_output_records": 4064, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622880178145, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764622880180090, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.134883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.180358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.180380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.180382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.180385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 21:01:20 compute-0 ceph-mon[75880]: rocksdb: (Original Log Time 2025/12/01-21:01:20.180387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 01 21:01:21 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v994: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:22 compute-0 ceph-mon[75880]: pgmap v994: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:23 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v995: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:24 compute-0 ceph-mon[75880]: pgmap v995: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:25 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v996: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:26 compute-0 ceph-mon[75880]: pgmap v996: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:27 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v997: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:28 compute-0 ceph-mon[75880]: pgmap v997: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:29 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v998: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:30 compute-0 ceph-mon[75880]: pgmap v998: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:31 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v999: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:32 compute-0 ceph-mon[75880]: pgmap v999: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_21:01:32
Dec 01 21:01:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 21:01:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 21:01:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr', 'volumes', 'images']
Dec 01 21:01:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1000: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 21:01:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 21:01:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:34 compute-0 ceph-mon[75880]: pgmap v1000: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:35 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1001: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:36 compute-0 podman[261322]: 2025-12-01 21:01:36.13848049 +0000 UTC m=+0.088065835 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 01 21:01:36 compute-0 ceph-mon[75880]: pgmap v1001: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:37 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1002: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:38 compute-0 ceph-mon[75880]: pgmap v1002: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:39 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1003: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:40 compute-0 podman[261343]: 2025-12-01 21:01:40.087902797 +0000 UTC m=+0.048177157 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:01:40 compute-0 podman[261344]: 2025-12-01 21:01:40.127168786 +0000 UTC m=+0.083846734 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 01 21:01:40 compute-0 ceph-mon[75880]: pgmap v1003: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:01:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 21:01:41 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1004: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:42 compute-0 ceph-mon[75880]: pgmap v1004: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:43 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1005: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:44 compute-0 ceph-mon[75880]: pgmap v1005: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:01:44.365 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:01:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:01:44.366 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:01:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:01:44.366 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:01:45 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1006: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:46 compute-0 ceph-mon[75880]: pgmap v1006: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:47 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1007: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:48 compute-0 ceph-mon[75880]: pgmap v1007: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:49 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1008: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:50 compute-0 ceph-mon[75880]: pgmap v1008: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:51 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1009: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:52 compute-0 ceph-mon[75880]: pgmap v1009: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:52 compute-0 sudo[261390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:01:52 compute-0 sudo[261390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:01:52 compute-0 sudo[261390]: pam_unix(sudo:session): session closed for user root
Dec 01 21:01:52 compute-0 sudo[261415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 21:01:52 compute-0 sudo[261415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:01:53 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1010: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:53 compute-0 sudo[261415]: pam_unix(sudo:session): session closed for user root
Dec 01 21:01:53 compute-0 sudo[261471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:01:53 compute-0 sudo[261471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:01:53 compute-0 sudo[261471]: pam_unix(sudo:session): session closed for user root
Dec 01 21:01:53 compute-0 sudo[261496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 01 21:01:53 compute-0 sudo[261496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:01:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:53 compute-0 sudo[261496]: pam_unix(sudo:session): session closed for user root
Dec 01 21:01:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 21:01:53 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:01:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 21:01:53 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:01:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 21:01:53 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:01:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 21:01:53 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 21:01:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 21:01:53 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:01:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 21:01:53 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 21:01:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 21:01:53 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 21:01:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 21:01:53 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:01:53 compute-0 sudo[261539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:01:53 compute-0 sudo[261539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:01:53 compute-0 sudo[261539]: pam_unix(sudo:session): session closed for user root
Dec 01 21:01:53 compute-0 sudo[261565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 21:01:53 compute-0 sudo[261565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:01:54 compute-0 podman[261604]: 2025-12-01 21:01:54.19245829 +0000 UTC m=+0.050853182 container create 8b42caf612edcec95b79b5271dc8c8da37b3abf2430d2648809969f9c0a8eeb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 21:01:54 compute-0 systemd[1]: Started libpod-conmon-8b42caf612edcec95b79b5271dc8c8da37b3abf2430d2648809969f9c0a8eeb0.scope.
Dec 01 21:01:54 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:01:54 compute-0 podman[261604]: 2025-12-01 21:01:54.169826122 +0000 UTC m=+0.028220994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:01:54 compute-0 podman[261604]: 2025-12-01 21:01:54.27301117 +0000 UTC m=+0.131406052 container init 8b42caf612edcec95b79b5271dc8c8da37b3abf2430d2648809969f9c0a8eeb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 21:01:54 compute-0 podman[261604]: 2025-12-01 21:01:54.279386649 +0000 UTC m=+0.137781541 container start 8b42caf612edcec95b79b5271dc8c8da37b3abf2430d2648809969f9c0a8eeb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 01 21:01:54 compute-0 podman[261604]: 2025-12-01 21:01:54.283083455 +0000 UTC m=+0.141478337 container attach 8b42caf612edcec95b79b5271dc8c8da37b3abf2430d2648809969f9c0a8eeb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 21:01:54 compute-0 optimistic_moser[261618]: 167 167
Dec 01 21:01:54 compute-0 systemd[1]: libpod-8b42caf612edcec95b79b5271dc8c8da37b3abf2430d2648809969f9c0a8eeb0.scope: Deactivated successfully.
Dec 01 21:01:54 compute-0 podman[261604]: 2025-12-01 21:01:54.2877156 +0000 UTC m=+0.146110492 container died 8b42caf612edcec95b79b5271dc8c8da37b3abf2430d2648809969f9c0a8eeb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 21:01:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b192f7528aac9547166ab675ed1a8918c220ad4aa85fe49de75cff01779a2877-merged.mount: Deactivated successfully.
Dec 01 21:01:54 compute-0 podman[261604]: 2025-12-01 21:01:54.333228344 +0000 UTC m=+0.191623206 container remove 8b42caf612edcec95b79b5271dc8c8da37b3abf2430d2648809969f9c0a8eeb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 01 21:01:54 compute-0 ceph-mon[75880]: pgmap v1010: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:01:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:01:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:01:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 21:01:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:01:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 21:01:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 21:01:54 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:01:54 compute-0 systemd[1]: libpod-conmon-8b42caf612edcec95b79b5271dc8c8da37b3abf2430d2648809969f9c0a8eeb0.scope: Deactivated successfully.
Dec 01 21:01:54 compute-0 podman[261642]: 2025-12-01 21:01:54.494952122 +0000 UTC m=+0.044286026 container create fde5b29b0d1d2d48d4fdb63ac89bee45e3ffa4d5c35ab0cdf300f360d77edd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elion, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:01:54 compute-0 systemd[1]: Started libpod-conmon-fde5b29b0d1d2d48d4fdb63ac89bee45e3ffa4d5c35ab0cdf300f360d77edd63.scope.
Dec 01 21:01:54 compute-0 podman[261642]: 2025-12-01 21:01:54.477163036 +0000 UTC m=+0.026496960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:01:54 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3c72887a5d8982aaebb653e363f79ab7c41e516f1955ceb7b9c60789adc0f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3c72887a5d8982aaebb653e363f79ab7c41e516f1955ceb7b9c60789adc0f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3c72887a5d8982aaebb653e363f79ab7c41e516f1955ceb7b9c60789adc0f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3c72887a5d8982aaebb653e363f79ab7c41e516f1955ceb7b9c60789adc0f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3c72887a5d8982aaebb653e363f79ab7c41e516f1955ceb7b9c60789adc0f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:54 compute-0 podman[261642]: 2025-12-01 21:01:54.598518852 +0000 UTC m=+0.147852796 container init fde5b29b0d1d2d48d4fdb63ac89bee45e3ffa4d5c35ab0cdf300f360d77edd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elion, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Dec 01 21:01:54 compute-0 podman[261642]: 2025-12-01 21:01:54.607380229 +0000 UTC m=+0.156714133 container start fde5b29b0d1d2d48d4fdb63ac89bee45e3ffa4d5c35ab0cdf300f360d77edd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elion, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 21:01:54 compute-0 podman[261642]: 2025-12-01 21:01:54.610806436 +0000 UTC m=+0.160140360 container attach fde5b29b0d1d2d48d4fdb63ac89bee45e3ffa4d5c35ab0cdf300f360d77edd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 01 21:01:55 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1011: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:55 compute-0 cool_elion[261658]: --> passed data devices: 0 physical, 3 LVM
Dec 01 21:01:55 compute-0 cool_elion[261658]: --> All data devices are unavailable
Dec 01 21:01:55 compute-0 systemd[1]: libpod-fde5b29b0d1d2d48d4fdb63ac89bee45e3ffa4d5c35ab0cdf300f360d77edd63.scope: Deactivated successfully.
Dec 01 21:01:55 compute-0 podman[261642]: 2025-12-01 21:01:55.094117234 +0000 UTC m=+0.643451228 container died fde5b29b0d1d2d48d4fdb63ac89bee45e3ffa4d5c35ab0cdf300f360d77edd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 01 21:01:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a3c72887a5d8982aaebb653e363f79ab7c41e516f1955ceb7b9c60789adc0f8-merged.mount: Deactivated successfully.
Dec 01 21:01:55 compute-0 podman[261642]: 2025-12-01 21:01:55.348720718 +0000 UTC m=+0.898054632 container remove fde5b29b0d1d2d48d4fdb63ac89bee45e3ffa4d5c35ab0cdf300f360d77edd63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 01 21:01:55 compute-0 systemd[1]: libpod-conmon-fde5b29b0d1d2d48d4fdb63ac89bee45e3ffa4d5c35ab0cdf300f360d77edd63.scope: Deactivated successfully.
Dec 01 21:01:55 compute-0 sudo[261565]: pam_unix(sudo:session): session closed for user root
Dec 01 21:01:55 compute-0 sudo[261692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:01:55 compute-0 sudo[261692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:01:55 compute-0 sudo[261692]: pam_unix(sudo:session): session closed for user root
Dec 01 21:01:55 compute-0 sudo[261717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 21:01:55 compute-0 sudo[261717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:01:55 compute-0 podman[261754]: 2025-12-01 21:01:55.809964955 +0000 UTC m=+0.038208766 container create a47db2f335bf8a3d6b224bfdd93ad07d08ca8d9f246c1e151ab886a8eca98a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 01 21:01:55 compute-0 systemd[1]: Started libpod-conmon-a47db2f335bf8a3d6b224bfdd93ad07d08ca8d9f246c1e151ab886a8eca98a74.scope.
Dec 01 21:01:55 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:01:55 compute-0 podman[261754]: 2025-12-01 21:01:55.793442428 +0000 UTC m=+0.021686239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:01:55 compute-0 podman[261754]: 2025-12-01 21:01:55.903925825 +0000 UTC m=+0.132169686 container init a47db2f335bf8a3d6b224bfdd93ad07d08ca8d9f246c1e151ab886a8eca98a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 01 21:01:55 compute-0 podman[261754]: 2025-12-01 21:01:55.914386992 +0000 UTC m=+0.142630783 container start a47db2f335bf8a3d6b224bfdd93ad07d08ca8d9f246c1e151ab886a8eca98a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 01 21:01:55 compute-0 podman[261754]: 2025-12-01 21:01:55.917759598 +0000 UTC m=+0.146003429 container attach a47db2f335bf8a3d6b224bfdd93ad07d08ca8d9f246c1e151ab886a8eca98a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 01 21:01:55 compute-0 suspicious_meitner[261770]: 167 167
Dec 01 21:01:55 compute-0 systemd[1]: libpod-a47db2f335bf8a3d6b224bfdd93ad07d08ca8d9f246c1e151ab886a8eca98a74.scope: Deactivated successfully.
Dec 01 21:01:55 compute-0 podman[261754]: 2025-12-01 21:01:55.921716012 +0000 UTC m=+0.149959843 container died a47db2f335bf8a3d6b224bfdd93ad07d08ca8d9f246c1e151ab886a8eca98a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 01 21:01:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-39553ff14ac82959c7f7969a8206cd222dc62a2f337a6438d70aa44413eef32c-merged.mount: Deactivated successfully.
Dec 01 21:01:55 compute-0 podman[261754]: 2025-12-01 21:01:55.971770157 +0000 UTC m=+0.200013988 container remove a47db2f335bf8a3d6b224bfdd93ad07d08ca8d9f246c1e151ab886a8eca98a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_meitner, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:01:55 compute-0 systemd[1]: libpod-conmon-a47db2f335bf8a3d6b224bfdd93ad07d08ca8d9f246c1e151ab886a8eca98a74.scope: Deactivated successfully.
Dec 01 21:01:56 compute-0 podman[261797]: 2025-12-01 21:01:56.209841444 +0000 UTC m=+0.055816147 container create 6da9770785a8b0dbc57d8b8a4d401b07e79a10655bf03721b89fc289110ee6c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 01 21:01:56 compute-0 systemd[1]: Started libpod-conmon-6da9770785a8b0dbc57d8b8a4d401b07e79a10655bf03721b89fc289110ee6c9.scope.
Dec 01 21:01:56 compute-0 podman[261797]: 2025-12-01 21:01:56.185479991 +0000 UTC m=+0.031454774 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:01:56 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81eef1e4c4272fe48034560601d5b3390fa135fa315e4b27d00ccf7907b5538d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81eef1e4c4272fe48034560601d5b3390fa135fa315e4b27d00ccf7907b5538d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81eef1e4c4272fe48034560601d5b3390fa135fa315e4b27d00ccf7907b5538d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81eef1e4c4272fe48034560601d5b3390fa135fa315e4b27d00ccf7907b5538d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:56 compute-0 podman[261797]: 2025-12-01 21:01:56.311106201 +0000 UTC m=+0.157080934 container init 6da9770785a8b0dbc57d8b8a4d401b07e79a10655bf03721b89fc289110ee6c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 01 21:01:56 compute-0 podman[261797]: 2025-12-01 21:01:56.321072293 +0000 UTC m=+0.167047006 container start 6da9770785a8b0dbc57d8b8a4d401b07e79a10655bf03721b89fc289110ee6c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_liskov, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 21:01:56 compute-0 podman[261797]: 2025-12-01 21:01:56.324620444 +0000 UTC m=+0.170595157 container attach 6da9770785a8b0dbc57d8b8a4d401b07e79a10655bf03721b89fc289110ee6c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_liskov, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 21:01:56 compute-0 ceph-mon[75880]: pgmap v1011: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]: {
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:     "0": [
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:         {
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "devices": [
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "/dev/loop3"
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             ],
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_name": "ceph_lv0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_size": "21470642176",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "name": "ceph_lv0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "tags": {
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.cluster_name": "ceph",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.crush_device_class": "",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.encrypted": "0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.objectstore": "bluestore",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.osd_id": "0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.type": "block",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.vdo": "0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.with_tpm": "0"
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             },
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "type": "block",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "vg_name": "ceph_vg0"
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:         }
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:     ],
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:     "1": [
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:         {
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "devices": [
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "/dev/loop4"
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             ],
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_name": "ceph_lv1",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_size": "21470642176",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "name": "ceph_lv1",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "tags": {
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.cluster_name": "ceph",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.crush_device_class": "",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.encrypted": "0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.objectstore": "bluestore",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.osd_id": "1",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.type": "block",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.vdo": "0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.with_tpm": "0"
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             },
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "type": "block",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "vg_name": "ceph_vg1"
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:         }
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:     ],
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:     "2": [
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:         {
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "devices": [
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "/dev/loop5"
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             ],
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_name": "ceph_lv2",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_size": "21470642176",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "name": "ceph_lv2",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "tags": {
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.cluster_name": "ceph",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.crush_device_class": "",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.encrypted": "0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.objectstore": "bluestore",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.osd_id": "2",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.type": "block",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.vdo": "0",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:                 "ceph.with_tpm": "0"
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             },
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "type": "block",
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:             "vg_name": "ceph_vg2"
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:         }
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]:     ]
Dec 01 21:01:56 compute-0 mystifying_liskov[261814]: }
Dec 01 21:01:56 compute-0 systemd[1]: libpod-6da9770785a8b0dbc57d8b8a4d401b07e79a10655bf03721b89fc289110ee6c9.scope: Deactivated successfully.
Dec 01 21:01:56 compute-0 podman[261797]: 2025-12-01 21:01:56.614080488 +0000 UTC m=+0.460055231 container died 6da9770785a8b0dbc57d8b8a4d401b07e79a10655bf03721b89fc289110ee6c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 01 21:01:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-81eef1e4c4272fe48034560601d5b3390fa135fa315e4b27d00ccf7907b5538d-merged.mount: Deactivated successfully.
Dec 01 21:01:56 compute-0 podman[261797]: 2025-12-01 21:01:56.657953691 +0000 UTC m=+0.503928404 container remove 6da9770785a8b0dbc57d8b8a4d401b07e79a10655bf03721b89fc289110ee6c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:01:56 compute-0 systemd[1]: libpod-conmon-6da9770785a8b0dbc57d8b8a4d401b07e79a10655bf03721b89fc289110ee6c9.scope: Deactivated successfully.
Dec 01 21:01:56 compute-0 sudo[261717]: pam_unix(sudo:session): session closed for user root
Dec 01 21:01:56 compute-0 sudo[261834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:01:56 compute-0 sudo[261834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:01:56 compute-0 sudo[261834]: pam_unix(sudo:session): session closed for user root
Dec 01 21:01:56 compute-0 sudo[261859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 21:01:56 compute-0 sudo[261859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:01:57 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1012: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:57 compute-0 podman[261897]: 2025-12-01 21:01:57.166118426 +0000 UTC m=+0.048298062 container create 6a07ec830db5c85b8567bbfd0f2bd587525f25efacda5c68e57cd521e9441320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 01 21:01:57 compute-0 systemd[1]: Started libpod-conmon-6a07ec830db5c85b8567bbfd0f2bd587525f25efacda5c68e57cd521e9441320.scope.
Dec 01 21:01:57 compute-0 podman[261897]: 2025-12-01 21:01:57.146767511 +0000 UTC m=+0.028947167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:01:57 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:01:57 compute-0 podman[261897]: 2025-12-01 21:01:57.263207993 +0000 UTC m=+0.145387629 container init 6a07ec830db5c85b8567bbfd0f2bd587525f25efacda5c68e57cd521e9441320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hawking, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 01 21:01:57 compute-0 podman[261897]: 2025-12-01 21:01:57.271210504 +0000 UTC m=+0.153390140 container start 6a07ec830db5c85b8567bbfd0f2bd587525f25efacda5c68e57cd521e9441320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 21:01:57 compute-0 systemd[1]: libpod-6a07ec830db5c85b8567bbfd0f2bd587525f25efacda5c68e57cd521e9441320.scope: Deactivated successfully.
Dec 01 21:01:57 compute-0 podman[261897]: 2025-12-01 21:01:57.276326354 +0000 UTC m=+0.158505990 container attach 6a07ec830db5c85b8567bbfd0f2bd587525f25efacda5c68e57cd521e9441320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hawking, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 21:01:57 compute-0 elated_hawking[261913]: 167 167
Dec 01 21:01:57 compute-0 conmon[261913]: conmon 6a07ec830db5c85b8567 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a07ec830db5c85b8567bbfd0f2bd587525f25efacda5c68e57cd521e9441320.scope/container/memory.events
Dec 01 21:01:57 compute-0 podman[261897]: 2025-12-01 21:01:57.277470499 +0000 UTC m=+0.159650165 container died 6a07ec830db5c85b8567bbfd0f2bd587525f25efacda5c68e57cd521e9441320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 21:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-40dea3e9f3021c8fe21373d8c7852048b92b9d7f036c1de2ca06cf68b94ee3a7-merged.mount: Deactivated successfully.
Dec 01 21:01:57 compute-0 podman[261897]: 2025-12-01 21:01:57.323232161 +0000 UTC m=+0.205411807 container remove 6a07ec830db5c85b8567bbfd0f2bd587525f25efacda5c68e57cd521e9441320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:01:57 compute-0 systemd[1]: libpod-conmon-6a07ec830db5c85b8567bbfd0f2bd587525f25efacda5c68e57cd521e9441320.scope: Deactivated successfully.
Dec 01 21:01:57 compute-0 podman[261937]: 2025-12-01 21:01:57.525953352 +0000 UTC m=+0.061814404 container create 3fca415849dd2dd990c2a6e9c94cc14caab9381dc9d46cbfba0c9ca5b596aeb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kalam, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 01 21:01:57 compute-0 systemd[1]: Started libpod-conmon-3fca415849dd2dd990c2a6e9c94cc14caab9381dc9d46cbfba0c9ca5b596aeb8.scope.
Dec 01 21:01:57 compute-0 podman[261937]: 2025-12-01 21:01:57.502931852 +0000 UTC m=+0.038792994 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:01:57 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657deea179fe5bcccd4d464da25bce20ef9d43618bb2c84fca0d41a6b779755e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657deea179fe5bcccd4d464da25bce20ef9d43618bb2c84fca0d41a6b779755e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657deea179fe5bcccd4d464da25bce20ef9d43618bb2c84fca0d41a6b779755e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657deea179fe5bcccd4d464da25bce20ef9d43618bb2c84fca0d41a6b779755e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 21:01:57 compute-0 podman[261937]: 2025-12-01 21:01:57.634565519 +0000 UTC m=+0.170426661 container init 3fca415849dd2dd990c2a6e9c94cc14caab9381dc9d46cbfba0c9ca5b596aeb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 21:01:57 compute-0 podman[261937]: 2025-12-01 21:01:57.64033926 +0000 UTC m=+0.176200312 container start 3fca415849dd2dd990c2a6e9c94cc14caab9381dc9d46cbfba0c9ca5b596aeb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kalam, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:01:57 compute-0 podman[261937]: 2025-12-01 21:01:57.643953872 +0000 UTC m=+0.179815024 container attach 3fca415849dd2dd990c2a6e9c94cc14caab9381dc9d46cbfba0c9ca5b596aeb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 01 21:01:58 compute-0 lvm[262032]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 21:01:58 compute-0 lvm[262032]: VG ceph_vg0 finished
Dec 01 21:01:58 compute-0 lvm[262033]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 21:01:58 compute-0 lvm[262033]: VG ceph_vg1 finished
Dec 01 21:01:58 compute-0 lvm[262035]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 21:01:58 compute-0 lvm[262035]: VG ceph_vg2 finished
Dec 01 21:01:58 compute-0 ceph-mon[75880]: pgmap v1012: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:58 compute-0 epic_kalam[261954]: {}
Dec 01 21:01:58 compute-0 systemd[1]: libpod-3fca415849dd2dd990c2a6e9c94cc14caab9381dc9d46cbfba0c9ca5b596aeb8.scope: Deactivated successfully.
Dec 01 21:01:58 compute-0 systemd[1]: libpod-3fca415849dd2dd990c2a6e9c94cc14caab9381dc9d46cbfba0c9ca5b596aeb8.scope: Consumed 1.359s CPU time.
Dec 01 21:01:58 compute-0 podman[261937]: 2025-12-01 21:01:58.483561925 +0000 UTC m=+1.019423027 container died 3fca415849dd2dd990c2a6e9c94cc14caab9381dc9d46cbfba0c9ca5b596aeb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 21:01:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-657deea179fe5bcccd4d464da25bce20ef9d43618bb2c84fca0d41a6b779755e-merged.mount: Deactivated successfully.
Dec 01 21:01:58 compute-0 podman[261937]: 2025-12-01 21:01:58.530593927 +0000 UTC m=+1.066454979 container remove 3fca415849dd2dd990c2a6e9c94cc14caab9381dc9d46cbfba0c9ca5b596aeb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kalam, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 21:01:58 compute-0 systemd[1]: libpod-conmon-3fca415849dd2dd990c2a6e9c94cc14caab9381dc9d46cbfba0c9ca5b596aeb8.scope: Deactivated successfully.
Dec 01 21:01:58 compute-0 sudo[261859]: pam_unix(sudo:session): session closed for user root
Dec 01 21:01:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 21:01:58 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:01:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:01:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 21:01:58 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:01:58 compute-0 sudo[262051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 21:01:58 compute-0 sudo[262051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:01:58 compute-0 sudo[262051]: pam_unix(sudo:session): session closed for user root
Dec 01 21:01:59 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1013: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:01:59 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:01:59 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:01:59 compute-0 ceph-mon[75880]: pgmap v1013: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:00 compute-0 nova_compute[244568]: 2025-12-01 21:02:00.969 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:02:01 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1014: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:02 compute-0 ceph-mon[75880]: pgmap v1014: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 21:02:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3287269873' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 21:02:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 21:02:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3287269873' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 21:02:02 compute-0 nova_compute[244568]: 2025-12-01 21:02:02.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:02:02 compute-0 nova_compute[244568]: 2025-12-01 21:02:02.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 21:02:02 compute-0 nova_compute[244568]: 2025-12-01 21:02:02.957 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 21:02:02 compute-0 nova_compute[244568]: 2025-12-01 21:02:02.971 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 21:02:02 compute-0 nova_compute[244568]: 2025-12-01 21:02:02.971 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:02:02 compute-0 nova_compute[244568]: 2025-12-01 21:02:02.971 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:02:02 compute-0 nova_compute[244568]: 2025-12-01 21:02:02.971 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 21:02:03 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1015: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3287269873' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 21:02:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3287269873' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 21:02:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:02:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:02:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:02:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:02:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:02:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:02:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:04 compute-0 ceph-mon[75880]: pgmap v1015: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:04 compute-0 nova_compute[244568]: 2025-12-01 21:02:04.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:02:04 compute-0 nova_compute[244568]: 2025-12-01 21:02:04.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:02:05 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1016: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:06 compute-0 ceph-mon[75880]: pgmap v1016: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:06 compute-0 nova_compute[244568]: 2025-12-01 21:02:06.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:02:06 compute-0 nova_compute[244568]: 2025-12-01 21:02:06.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:02:06 compute-0 nova_compute[244568]: 2025-12-01 21:02:06.989 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:02:06 compute-0 nova_compute[244568]: 2025-12-01 21:02:06.989 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:02:06 compute-0 nova_compute[244568]: 2025-12-01 21:02:06.989 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:02:06 compute-0 nova_compute[244568]: 2025-12-01 21:02:06.990 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 21:02:06 compute-0 nova_compute[244568]: 2025-12-01 21:02:06.990 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 21:02:07 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1017: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:07 compute-0 podman[262077]: 2025-12-01 21:02:07.137561762 +0000 UTC m=+0.082389887 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 01 21:02:07 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 21:02:07 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2633459878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:02:07 compute-0 nova_compute[244568]: 2025-12-01 21:02:07.588 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 21:02:07 compute-0 nova_compute[244568]: 2025-12-01 21:02:07.755 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 21:02:07 compute-0 nova_compute[244568]: 2025-12-01 21:02:07.756 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5132MB free_disk=59.988265527412295GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 21:02:07 compute-0 nova_compute[244568]: 2025-12-01 21:02:07.756 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:02:07 compute-0 nova_compute[244568]: 2025-12-01 21:02:07.757 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:02:07 compute-0 nova_compute[244568]: 2025-12-01 21:02:07.808 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 21:02:07 compute-0 nova_compute[244568]: 2025-12-01 21:02:07.808 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 21:02:07 compute-0 nova_compute[244568]: 2025-12-01 21:02:07.829 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 21:02:08 compute-0 ceph-mon[75880]: pgmap v1017: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:08 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2633459878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:02:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 21:02:08 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737854259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:02:08 compute-0 nova_compute[244568]: 2025-12-01 21:02:08.356 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 21:02:08 compute-0 nova_compute[244568]: 2025-12-01 21:02:08.362 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 21:02:08 compute-0 nova_compute[244568]: 2025-12-01 21:02:08.379 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 21:02:08 compute-0 nova_compute[244568]: 2025-12-01 21:02:08.381 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 21:02:08 compute-0 nova_compute[244568]: 2025-12-01 21:02:08.382 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:02:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:09 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1018: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:09 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/737854259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:02:09 compute-0 nova_compute[244568]: 2025-12-01 21:02:09.383 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:02:10 compute-0 ceph-mon[75880]: pgmap v1018: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:11 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1019: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:11 compute-0 podman[262141]: 2025-12-01 21:02:11.157252767 +0000 UTC m=+0.052297587 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 01 21:02:11 compute-0 podman[262142]: 2025-12-01 21:02:11.233227893 +0000 UTC m=+0.111235050 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 21:02:12 compute-0 ceph-mon[75880]: pgmap v1019: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:13 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1020: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:14 compute-0 ceph-mon[75880]: pgmap v1020: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:15 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1021: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:16 compute-0 ceph-mon[75880]: pgmap v1021: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:02:16 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5601 writes, 23K keys, 5601 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5601 writes, 998 syncs, 5.61 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1244 writes, 3763 keys, 1244 commit groups, 1.0 writes per commit group, ingest: 2.08 MB, 0.00 MB/s
                                           Interval WAL: 1244 writes, 544 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 01 21:02:17 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1022: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:18 compute-0 ceph-mon[75880]: pgmap v1022: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:19 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1023: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:20 compute-0 ceph-mon[75880]: pgmap v1023: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:21 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1024: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:02:21 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6522 writes, 26K keys, 6522 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6522 writes, 1409 syncs, 4.63 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2010 writes, 5464 keys, 2010 commit groups, 1.0 writes per commit group, ingest: 2.73 MB, 0.00 MB/s
                                           Interval WAL: 2010 writes, 906 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 01 21:02:22 compute-0 ceph-mon[75880]: pgmap v1024: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:23 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1025: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:24 compute-0 ceph-mon[75880]: pgmap v1025: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:25 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1026: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:02:26 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5583 writes, 23K keys, 5583 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5583 writes, 995 syncs, 5.61 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1427 writes, 3716 keys, 1427 commit groups, 1.0 writes per commit group, ingest: 2.24 MB, 0.00 MB/s
                                           Interval WAL: 1427 writes, 625 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 01 21:02:26 compute-0 ceph-mon[75880]: pgmap v1026: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:27 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1027: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:28 compute-0 ceph-mgr[76174]: [devicehealth INFO root] Check health
Dec 01 21:02:28 compute-0 ceph-mon[75880]: pgmap v1027: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:29 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1028: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:30 compute-0 ceph-mon[75880]: pgmap v1028: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:31 compute-0 rsyslogd[1006]: imjournal: 15344 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 01 21:02:31 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1029: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:31 compute-0 ceph-mon[75880]: pgmap v1029: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_21:02:32
Dec 01 21:02:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 21:02:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 21:02:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'volumes', 'backups', 'cephfs.cephfs.data', '.mgr', 'vms']
Dec 01 21:02:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1030: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 21:02:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 21:02:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:34 compute-0 ceph-mon[75880]: pgmap v1030: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:35 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1031: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:36 compute-0 ceph-mon[75880]: pgmap v1031: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:37 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1032: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:38 compute-0 podman[262187]: 2025-12-01 21:02:38.127623755 +0000 UTC m=+0.078997906 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 01 21:02:38 compute-0 ceph-mon[75880]: pgmap v1032: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:39 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1033: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:40 compute-0 ceph-mon[75880]: pgmap v1033: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:02:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 21:02:41 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1034: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:42 compute-0 podman[262208]: 2025-12-01 21:02:42.12989943 +0000 UTC m=+0.075729534 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 01 21:02:42 compute-0 podman[262209]: 2025-12-01 21:02:42.164922757 +0000 UTC m=+0.100374046 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 01 21:02:42 compute-0 ceph-mon[75880]: pgmap v1034: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:43 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1035: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:44 compute-0 ceph-mon[75880]: pgmap v1035: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:02:44.367 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:02:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:02:44.367 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:02:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:02:44.367 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:02:45 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1036: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:46 compute-0 ceph-mon[75880]: pgmap v1036: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:47 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1037: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:48 compute-0 ceph-mon[75880]: pgmap v1037: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:49 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1038: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:50 compute-0 ceph-mon[75880]: pgmap v1038: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:51 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1039: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:52 compute-0 ceph-mon[75880]: pgmap v1039: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:53 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1040: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:54 compute-0 ceph-mon[75880]: pgmap v1040: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:55 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1041: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:56 compute-0 ceph-mon[75880]: pgmap v1041: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:57 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1042: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:58 compute-0 ceph-mon[75880]: pgmap v1042: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:02:58 compute-0 sudo[262255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:02:58 compute-0 sudo[262255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:02:58 compute-0 sudo[262255]: pam_unix(sudo:session): session closed for user root
Dec 01 21:02:58 compute-0 sudo[262280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 01 21:02:58 compute-0 sudo[262280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:02:59 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1043: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:02:59 compute-0 sudo[262280]: pam_unix(sudo:session): session closed for user root
Dec 01 21:02:59 compute-0 sudo[262337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:02:59 compute-0 sudo[262337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:02:59 compute-0 sudo[262337]: pam_unix(sudo:session): session closed for user root
Dec 01 21:02:59 compute-0 sudo[262362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- inventory --format=json-pretty --filter-for-batch
Dec 01 21:02:59 compute-0 sudo[262362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:02:59 compute-0 podman[262398]: 2025-12-01 21:02:59.988861829 +0000 UTC m=+0.056868143 container create b11e31a37f82007df4c57ed90f09c43ef933a857ea2b660cce6a30bef46cd30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 01 21:03:00 compute-0 systemd[1]: Started libpod-conmon-b11e31a37f82007df4c57ed90f09c43ef933a857ea2b660cce6a30bef46cd30c.scope.
Dec 01 21:03:00 compute-0 podman[262398]: 2025-12-01 21:02:59.963485034 +0000 UTC m=+0.031491408 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:03:00 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:03:00 compute-0 podman[262398]: 2025-12-01 21:03:00.087386655 +0000 UTC m=+0.155392969 container init b11e31a37f82007df4c57ed90f09c43ef933a857ea2b660cce6a30bef46cd30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 21:03:00 compute-0 podman[262398]: 2025-12-01 21:03:00.096635445 +0000 UTC m=+0.164641739 container start b11e31a37f82007df4c57ed90f09c43ef933a857ea2b660cce6a30bef46cd30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_gauss, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 01 21:03:00 compute-0 podman[262398]: 2025-12-01 21:03:00.09968425 +0000 UTC m=+0.167690544 container attach b11e31a37f82007df4c57ed90f09c43ef933a857ea2b660cce6a30bef46cd30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:03:00 compute-0 systemd[1]: libpod-b11e31a37f82007df4c57ed90f09c43ef933a857ea2b660cce6a30bef46cd30c.scope: Deactivated successfully.
Dec 01 21:03:00 compute-0 jovial_gauss[262414]: 167 167
Dec 01 21:03:00 compute-0 conmon[262414]: conmon b11e31a37f82007df4c5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b11e31a37f82007df4c57ed90f09c43ef933a857ea2b660cce6a30bef46cd30c.scope/container/memory.events
Dec 01 21:03:00 compute-0 podman[262398]: 2025-12-01 21:03:00.104657377 +0000 UTC m=+0.172663711 container died b11e31a37f82007df4c57ed90f09c43ef933a857ea2b660cce6a30bef46cd30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_gauss, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 21:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ad370a86fca6f8869ff95ac41c51778a56d01cac5dfe13285118b5503b0574c-merged.mount: Deactivated successfully.
Dec 01 21:03:00 compute-0 podman[262398]: 2025-12-01 21:03:00.146379103 +0000 UTC m=+0.214385397 container remove b11e31a37f82007df4c57ed90f09c43ef933a857ea2b660cce6a30bef46cd30c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_gauss, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 01 21:03:00 compute-0 systemd[1]: libpod-conmon-b11e31a37f82007df4c57ed90f09c43ef933a857ea2b660cce6a30bef46cd30c.scope: Deactivated successfully.
Dec 01 21:03:00 compute-0 ceph-mon[75880]: pgmap v1043: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:00 compute-0 podman[262440]: 2025-12-01 21:03:00.390415687 +0000 UTC m=+0.071280633 container create 84a6b329c378901d9a09181e39708ee78b02b172825e3160cf0266d9f199b64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 21:03:00 compute-0 systemd[1]: Started libpod-conmon-84a6b329c378901d9a09181e39708ee78b02b172825e3160cf0266d9f199b64d.scope.
Dec 01 21:03:00 compute-0 podman[262440]: 2025-12-01 21:03:00.3662303 +0000 UTC m=+0.047095226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:03:00 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f55e28c13c7aeb45ae87c9fee5fea0fd48ba00eab5c201e05daa6d95e1a34e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f55e28c13c7aeb45ae87c9fee5fea0fd48ba00eab5c201e05daa6d95e1a34e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f55e28c13c7aeb45ae87c9fee5fea0fd48ba00eab5c201e05daa6d95e1a34e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f55e28c13c7aeb45ae87c9fee5fea0fd48ba00eab5c201e05daa6d95e1a34e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:00 compute-0 podman[262440]: 2025-12-01 21:03:00.50699171 +0000 UTC m=+0.187856666 container init 84a6b329c378901d9a09181e39708ee78b02b172825e3160cf0266d9f199b64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 21:03:00 compute-0 podman[262440]: 2025-12-01 21:03:00.517470788 +0000 UTC m=+0.198335744 container start 84a6b329c378901d9a09181e39708ee78b02b172825e3160cf0266d9f199b64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 01 21:03:00 compute-0 podman[262440]: 2025-12-01 21:03:00.523409534 +0000 UTC m=+0.204274480 container attach 84a6b329c378901d9a09181e39708ee78b02b172825e3160cf0266d9f199b64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 01 21:03:01 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1044: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]: [
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:     {
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:         "available": false,
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:         "being_replaced": false,
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:         "ceph_device_lvm": false,
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:         "lsm_data": {},
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:         "lvs": [],
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:         "path": "/dev/sr0",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:         "rejected_reasons": [
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "Insufficient space (<5GB)",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "Has a FileSystem"
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:         ],
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:         "sys_api": {
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "actuators": null,
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "device_nodes": [
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:                 "sr0"
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             ],
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "devname": "sr0",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "human_readable_size": "482.00 KB",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "id_bus": "ata",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "model": "QEMU DVD-ROM",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "nr_requests": "2",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "parent": "/dev/sr0",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "partitions": {},
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "path": "/dev/sr0",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "removable": "1",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "rev": "2.5+",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "ro": "0",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "rotational": "1",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "sas_address": "",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "sas_device_handle": "",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "scheduler_mode": "mq-deadline",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "sectors": 0,
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "sectorsize": "2048",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "size": 493568.0,
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "support_discard": "2048",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "type": "disk",
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:             "vendor": "QEMU"
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:         }
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]:     }
Dec 01 21:03:01 compute-0 sad_varahamihira[262457]: ]
Dec 01 21:03:01 compute-0 systemd[1]: libpod-84a6b329c378901d9a09181e39708ee78b02b172825e3160cf0266d9f199b64d.scope: Deactivated successfully.
Dec 01 21:03:01 compute-0 podman[262440]: 2025-12-01 21:03:01.127200487 +0000 UTC m=+0.808065433 container died 84a6b329c378901d9a09181e39708ee78b02b172825e3160cf0266d9f199b64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 21:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f55e28c13c7aeb45ae87c9fee5fea0fd48ba00eab5c201e05daa6d95e1a34e4-merged.mount: Deactivated successfully.
Dec 01 21:03:01 compute-0 podman[262440]: 2025-12-01 21:03:01.185342859 +0000 UTC m=+0.866207775 container remove 84a6b329c378901d9a09181e39708ee78b02b172825e3160cf0266d9f199b64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_varahamihira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 21:03:01 compute-0 systemd[1]: libpod-conmon-84a6b329c378901d9a09181e39708ee78b02b172825e3160cf0266d9f199b64d.scope: Deactivated successfully.
Dec 01 21:03:01 compute-0 sudo[262362]: pam_unix(sudo:session): session closed for user root
Dec 01 21:03:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 21:03:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:03:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 21:03:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:03:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 21:03:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:03:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 01 21:03:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 21:03:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 01 21:03:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:03:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 01 21:03:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 21:03:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 01 21:03:01 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 21:03:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 21:03:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:03:01 compute-0 sudo[263268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:03:01 compute-0 sudo[263268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:03:01 compute-0 sudo[263268]: pam_unix(sudo:session): session closed for user root
Dec 01 21:03:01 compute-0 sudo[263293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 01 21:03:01 compute-0 sudo[263293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:03:01 compute-0 podman[263330]: 2025-12-01 21:03:01.778368076 +0000 UTC m=+0.047076725 container create e1b3c2019169f90126bdae71d0f944538790aac69ebf2f823635e808c1663f84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 01 21:03:01 compute-0 systemd[1]: Started libpod-conmon-e1b3c2019169f90126bdae71d0f944538790aac69ebf2f823635e808c1663f84.scope.
Dec 01 21:03:01 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:03:01 compute-0 podman[263330]: 2025-12-01 21:03:01.851152456 +0000 UTC m=+0.119861105 container init e1b3c2019169f90126bdae71d0f944538790aac69ebf2f823635e808c1663f84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 01 21:03:01 compute-0 podman[263330]: 2025-12-01 21:03:01.757736979 +0000 UTC m=+0.026445648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:03:01 compute-0 podman[263330]: 2025-12-01 21:03:01.861690706 +0000 UTC m=+0.130399365 container start e1b3c2019169f90126bdae71d0f944538790aac69ebf2f823635e808c1663f84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 01 21:03:01 compute-0 nostalgic_greider[263346]: 167 167
Dec 01 21:03:01 compute-0 podman[263330]: 2025-12-01 21:03:01.86599559 +0000 UTC m=+0.134704299 container attach e1b3c2019169f90126bdae71d0f944538790aac69ebf2f823635e808c1663f84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:03:01 compute-0 systemd[1]: libpod-e1b3c2019169f90126bdae71d0f944538790aac69ebf2f823635e808c1663f84.scope: Deactivated successfully.
Dec 01 21:03:01 compute-0 podman[263330]: 2025-12-01 21:03:01.867208609 +0000 UTC m=+0.135917278 container died e1b3c2019169f90126bdae71d0f944538790aac69ebf2f823635e808c1663f84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_greider, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 01 21:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4627ae7994c87d1c1d325362b244c99ed7c4d5cd7bcad66a75bada7b3a01911a-merged.mount: Deactivated successfully.
Dec 01 21:03:01 compute-0 podman[263330]: 2025-12-01 21:03:01.906302833 +0000 UTC m=+0.175011472 container remove e1b3c2019169f90126bdae71d0f944538790aac69ebf2f823635e808c1663f84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_greider, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 01 21:03:01 compute-0 systemd[1]: libpod-conmon-e1b3c2019169f90126bdae71d0f944538790aac69ebf2f823635e808c1663f84.scope: Deactivated successfully.
Dec 01 21:03:01 compute-0 nova_compute[244568]: 2025-12-01 21:03:01.952 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:02 compute-0 podman[263369]: 2025-12-01 21:03:02.203889365 +0000 UTC m=+0.122885180 container create 9ce21dde76af1d8b6975f533989749f8fe875ba087a5d971e770b0d29d6d0c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_franklin, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 01 21:03:02 compute-0 podman[263369]: 2025-12-01 21:03:02.115499026 +0000 UTC m=+0.034494821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:03:02 compute-0 ceph-mon[75880]: pgmap v1044: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:03:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:03:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:03:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 01 21:03:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:03:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 01 21:03:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 01 21:03:02 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:03:02 compute-0 systemd[1]: Started libpod-conmon-9ce21dde76af1d8b6975f533989749f8fe875ba087a5d971e770b0d29d6d0c3b.scope.
Dec 01 21:03:02 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d309b397746453081b497ac578466537565bc309a4ad602a134961da3fcdbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d309b397746453081b497ac578466537565bc309a4ad602a134961da3fcdbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d309b397746453081b497ac578466537565bc309a4ad602a134961da3fcdbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d309b397746453081b497ac578466537565bc309a4ad602a134961da3fcdbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d309b397746453081b497ac578466537565bc309a4ad602a134961da3fcdbb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:02 compute-0 podman[263369]: 2025-12-01 21:03:02.311085303 +0000 UTC m=+0.230081098 container init 9ce21dde76af1d8b6975f533989749f8fe875ba087a5d971e770b0d29d6d0c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_franklin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 01 21:03:02 compute-0 podman[263369]: 2025-12-01 21:03:02.322717597 +0000 UTC m=+0.241713412 container start 9ce21dde76af1d8b6975f533989749f8fe875ba087a5d971e770b0d29d6d0c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_franklin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 21:03:02 compute-0 podman[263369]: 2025-12-01 21:03:02.326895569 +0000 UTC m=+0.245891434 container attach 9ce21dde76af1d8b6975f533989749f8fe875ba087a5d971e770b0d29d6d0c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 01 21:03:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 21:03:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3421483909' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 21:03:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 21:03:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3421483909' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 21:03:02 compute-0 adoring_franklin[263386]: --> passed data devices: 0 physical, 3 LVM
Dec 01 21:03:02 compute-0 adoring_franklin[263386]: --> All data devices are unavailable
Dec 01 21:03:02 compute-0 systemd[1]: libpod-9ce21dde76af1d8b6975f533989749f8fe875ba087a5d971e770b0d29d6d0c3b.scope: Deactivated successfully.
Dec 01 21:03:02 compute-0 podman[263369]: 2025-12-01 21:03:02.950360909 +0000 UTC m=+0.869356714 container died 9ce21dde76af1d8b6975f533989749f8fe875ba087a5d971e770b0d29d6d0c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 21:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6d309b397746453081b497ac578466537565bc309a4ad602a134961da3fcdbb-merged.mount: Deactivated successfully.
Dec 01 21:03:03 compute-0 podman[263369]: 2025-12-01 21:03:03.010041069 +0000 UTC m=+0.929036874 container remove 9ce21dde76af1d8b6975f533989749f8fe875ba087a5d971e770b0d29d6d0c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_franklin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 21:03:03 compute-0 systemd[1]: libpod-conmon-9ce21dde76af1d8b6975f533989749f8fe875ba087a5d971e770b0d29d6d0c3b.scope: Deactivated successfully.
Dec 01 21:03:03 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1045: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:03 compute-0 sudo[263293]: pam_unix(sudo:session): session closed for user root
Dec 01 21:03:03 compute-0 sudo[263419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:03:03 compute-0 sudo[263419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:03:03 compute-0 sudo[263419]: pam_unix(sudo:session): session closed for user root
Dec 01 21:03:03 compute-0 sudo[263444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- lvm list --format json
Dec 01 21:03:03 compute-0 sudo[263444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:03:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3421483909' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 21:03:03 compute-0 ceph-mon[75880]: from='client.? 192.168.122.10:0/3421483909' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 01 21:03:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:03:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:03:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:03:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:03:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:03:03 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:03:03 compute-0 podman[263480]: 2025-12-01 21:03:03.59203777 +0000 UTC m=+0.060221898 container create ccfbbef8d0ab094cfdf653a33e3f27c1cc3a62ec515d0ae78b191d3e55430689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 01 21:03:03 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:03 compute-0 systemd[1]: Started libpod-conmon-ccfbbef8d0ab094cfdf653a33e3f27c1cc3a62ec515d0ae78b191d3e55430689.scope.
Dec 01 21:03:03 compute-0 podman[263480]: 2025-12-01 21:03:03.566756908 +0000 UTC m=+0.034941046 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:03:03 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:03:03 compute-0 podman[263480]: 2025-12-01 21:03:03.696326007 +0000 UTC m=+0.164510115 container init ccfbbef8d0ab094cfdf653a33e3f27c1cc3a62ec515d0ae78b191d3e55430689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 01 21:03:03 compute-0 podman[263480]: 2025-12-01 21:03:03.709014234 +0000 UTC m=+0.177198332 container start ccfbbef8d0ab094cfdf653a33e3f27c1cc3a62ec515d0ae78b191d3e55430689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 01 21:03:03 compute-0 podman[263480]: 2025-12-01 21:03:03.712348178 +0000 UTC m=+0.180532286 container attach ccfbbef8d0ab094cfdf653a33e3f27c1cc3a62ec515d0ae78b191d3e55430689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 01 21:03:03 compute-0 elegant_easley[263496]: 167 167
Dec 01 21:03:03 compute-0 systemd[1]: libpod-ccfbbef8d0ab094cfdf653a33e3f27c1cc3a62ec515d0ae78b191d3e55430689.scope: Deactivated successfully.
Dec 01 21:03:03 compute-0 podman[263480]: 2025-12-01 21:03:03.715007742 +0000 UTC m=+0.183191830 container died ccfbbef8d0ab094cfdf653a33e3f27c1cc3a62ec515d0ae78b191d3e55430689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 21:03:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a2e32d9202ec72890747bf0ac778ab806d73d0b9cc38ce716b7ab4201e2137a-merged.mount: Deactivated successfully.
Dec 01 21:03:03 compute-0 podman[263480]: 2025-12-01 21:03:03.757570545 +0000 UTC m=+0.225754633 container remove ccfbbef8d0ab094cfdf653a33e3f27c1cc3a62ec515d0ae78b191d3e55430689 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 01 21:03:03 compute-0 systemd[1]: libpod-conmon-ccfbbef8d0ab094cfdf653a33e3f27c1cc3a62ec515d0ae78b191d3e55430689.scope: Deactivated successfully.
Dec 01 21:03:03 compute-0 podman[263520]: 2025-12-01 21:03:03.972948112 +0000 UTC m=+0.050957068 container create 9d3bb2068343f680325ded5d01a0346a12933e145728bdeeb90a8d78017b14f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_tesla, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 01 21:03:04 compute-0 systemd[1]: Started libpod-conmon-9d3bb2068343f680325ded5d01a0346a12933e145728bdeeb90a8d78017b14f0.scope.
Dec 01 21:03:04 compute-0 podman[263520]: 2025-12-01 21:03:03.95277182 +0000 UTC m=+0.030780806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:03:04 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e26ad61d678406ac311a3f80c651800d45144748b20a446652b5b3b4f75250d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e26ad61d678406ac311a3f80c651800d45144748b20a446652b5b3b4f75250d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e26ad61d678406ac311a3f80c651800d45144748b20a446652b5b3b4f75250d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e26ad61d678406ac311a3f80c651800d45144748b20a446652b5b3b4f75250d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:04 compute-0 podman[263520]: 2025-12-01 21:03:04.087467459 +0000 UTC m=+0.165476425 container init 9d3bb2068343f680325ded5d01a0346a12933e145728bdeeb90a8d78017b14f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_tesla, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Dec 01 21:03:04 compute-0 podman[263520]: 2025-12-01 21:03:04.095288644 +0000 UTC m=+0.173297610 container start 9d3bb2068343f680325ded5d01a0346a12933e145728bdeeb90a8d78017b14f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_tesla, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 21:03:04 compute-0 podman[263520]: 2025-12-01 21:03:04.099715873 +0000 UTC m=+0.177724829 container attach 9d3bb2068343f680325ded5d01a0346a12933e145728bdeeb90a8d78017b14f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_tesla, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 01 21:03:04 compute-0 ceph-mon[75880]: pgmap v1045: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]: {
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:     "0": [
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:         {
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "devices": [
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "/dev/loop3"
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             ],
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_name": "ceph_lv0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_size": "21470642176",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f68c8d6d-1275-44aa-87ed-4bb7c5666585,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "name": "ceph_lv0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "tags": {
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.block_uuid": "u6DmKQ-6jlo-uVNn-SlQD-FUll-jTRu-BLKkeJ",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.cluster_name": "ceph",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.crush_device_class": "",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.encrypted": "0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.objectstore": "bluestore",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.osd_fsid": "f68c8d6d-1275-44aa-87ed-4bb7c5666585",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.osd_id": "0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.type": "block",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.vdo": "0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.with_tpm": "0"
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             },
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "type": "block",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "vg_name": "ceph_vg0"
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:         }
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:     ],
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:     "1": [
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:         {
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "devices": [
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "/dev/loop4"
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             ],
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_name": "ceph_lv1",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_size": "21470642176",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "name": "ceph_lv1",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "tags": {
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.block_uuid": "1iheTk-KTXf-BkRN-mtWb-63Vi-7FbJ-2ShQdR",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.cluster_name": "ceph",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.crush_device_class": "",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.encrypted": "0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.objectstore": "bluestore",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.osd_fsid": "97c6b5d8-9c2b-4c5b-aa46-d17369ed5d79",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.osd_id": "1",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.type": "block",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.vdo": "0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.with_tpm": "0"
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             },
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "type": "block",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "vg_name": "ceph_vg1"
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:         }
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:     ],
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:     "2": [
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:         {
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "devices": [
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "/dev/loop5"
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             ],
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_name": "ceph_lv2",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_size": "21470642176",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=dcf60a89-bba0-58b0-a1bf-d4bde723201b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c5b330ef-d0af-41ba-a172-fe530a921657,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "lv_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "name": "ceph_lv2",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "tags": {
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.block_uuid": "1cO7uw-SYUQ-pHNH-6jD3-8n7x-Amjk-XRrH8r",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.cephx_lockbox_secret": "",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.cluster_fsid": "dcf60a89-bba0-58b0-a1bf-d4bde723201b",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.cluster_name": "ceph",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.crush_device_class": "",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.encrypted": "0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.objectstore": "bluestore",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.osd_fsid": "c5b330ef-d0af-41ba-a172-fe530a921657",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.osd_id": "2",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.type": "block",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.vdo": "0",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:                 "ceph.with_tpm": "0"
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             },
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "type": "block",
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:             "vg_name": "ceph_vg2"
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:         }
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]:     ]
Dec 01 21:03:04 compute-0 upbeat_tesla[263537]: }
Dec 01 21:03:04 compute-0 systemd[1]: libpod-9d3bb2068343f680325ded5d01a0346a12933e145728bdeeb90a8d78017b14f0.scope: Deactivated successfully.
Dec 01 21:03:04 compute-0 podman[263520]: 2025-12-01 21:03:04.437335149 +0000 UTC m=+0.515344095 container died 9d3bb2068343f680325ded5d01a0346a12933e145728bdeeb90a8d78017b14f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 21:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e26ad61d678406ac311a3f80c651800d45144748b20a446652b5b3b4f75250d-merged.mount: Deactivated successfully.
Dec 01 21:03:04 compute-0 podman[263520]: 2025-12-01 21:03:04.482222365 +0000 UTC m=+0.560231321 container remove 9d3bb2068343f680325ded5d01a0346a12933e145728bdeeb90a8d78017b14f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_tesla, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 01 21:03:04 compute-0 systemd[1]: libpod-conmon-9d3bb2068343f680325ded5d01a0346a12933e145728bdeeb90a8d78017b14f0.scope: Deactivated successfully.
Dec 01 21:03:04 compute-0 sudo[263444]: pam_unix(sudo:session): session closed for user root
Dec 01 21:03:04 compute-0 sudo[263558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 01 21:03:04 compute-0 sudo[263558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:03:04 compute-0 sudo[263558]: pam_unix(sudo:session): session closed for user root
Dec 01 21:03:04 compute-0 sudo[263583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/dcf60a89-bba0-58b0-a1bf-d4bde723201b/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b -- raw list --format json
Dec 01 21:03:04 compute-0 sudo[263583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:03:04 compute-0 nova_compute[244568]: 2025-12-01 21:03:04.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:04 compute-0 nova_compute[244568]: 2025-12-01 21:03:04.958 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 01 21:03:04 compute-0 nova_compute[244568]: 2025-12-01 21:03:04.958 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 01 21:03:04 compute-0 nova_compute[244568]: 2025-12-01 21:03:04.978 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 01 21:03:04 compute-0 nova_compute[244568]: 2025-12-01 21:03:04.978 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:04 compute-0 nova_compute[244568]: 2025-12-01 21:03:04.979 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:04 compute-0 nova_compute[244568]: 2025-12-01 21:03:04.979 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:04 compute-0 nova_compute[244568]: 2025-12-01 21:03:04.980 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 01 21:03:05 compute-0 podman[263621]: 2025-12-01 21:03:05.020055993 +0000 UTC m=+0.051983640 container create a98df0cb0aa168b2a9064456a626bef4f28c4efcaf0f4198b24cacbdcecf6715 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_haslett, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 01 21:03:05 compute-0 systemd[1]: Started libpod-conmon-a98df0cb0aa168b2a9064456a626bef4f28c4efcaf0f4198b24cacbdcecf6715.scope.
Dec 01 21:03:05 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1046: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:05 compute-0 podman[263621]: 2025-12-01 21:03:04.997041132 +0000 UTC m=+0.028968779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:03:05 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:03:05 compute-0 podman[263621]: 2025-12-01 21:03:05.125370072 +0000 UTC m=+0.157297779 container init a98df0cb0aa168b2a9064456a626bef4f28c4efcaf0f4198b24cacbdcecf6715 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_haslett, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 01 21:03:05 compute-0 podman[263621]: 2025-12-01 21:03:05.137571314 +0000 UTC m=+0.169498931 container start a98df0cb0aa168b2a9064456a626bef4f28c4efcaf0f4198b24cacbdcecf6715 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 01 21:03:05 compute-0 podman[263621]: 2025-12-01 21:03:05.141507617 +0000 UTC m=+0.173435324 container attach a98df0cb0aa168b2a9064456a626bef4f28c4efcaf0f4198b24cacbdcecf6715 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 21:03:05 compute-0 eloquent_haslett[263637]: 167 167
Dec 01 21:03:05 compute-0 systemd[1]: libpod-a98df0cb0aa168b2a9064456a626bef4f28c4efcaf0f4198b24cacbdcecf6715.scope: Deactivated successfully.
Dec 01 21:03:05 compute-0 podman[263621]: 2025-12-01 21:03:05.144011306 +0000 UTC m=+0.175938923 container died a98df0cb0aa168b2a9064456a626bef4f28c4efcaf0f4198b24cacbdcecf6715 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_haslett, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 01 21:03:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-173c7c976efcaa894fd993b357f35b8f7496239e0cdb12fccea394314785cb17-merged.mount: Deactivated successfully.
Dec 01 21:03:05 compute-0 podman[263621]: 2025-12-01 21:03:05.18405044 +0000 UTC m=+0.215978087 container remove a98df0cb0aa168b2a9064456a626bef4f28c4efcaf0f4198b24cacbdcecf6715 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_haslett, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 01 21:03:05 compute-0 systemd[1]: libpod-conmon-a98df0cb0aa168b2a9064456a626bef4f28c4efcaf0f4198b24cacbdcecf6715.scope: Deactivated successfully.
Dec 01 21:03:05 compute-0 podman[263660]: 2025-12-01 21:03:05.412227138 +0000 UTC m=+0.066778253 container create 37ea588ed15152b2f9015273e39737f57aca20ef6055101f1791f839df1223c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 01 21:03:05 compute-0 systemd[1]: Started libpod-conmon-37ea588ed15152b2f9015273e39737f57aca20ef6055101f1791f839df1223c7.scope.
Dec 01 21:03:05 compute-0 podman[263660]: 2025-12-01 21:03:05.384124458 +0000 UTC m=+0.038675603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 01 21:03:05 compute-0 systemd[1]: Started libcrun container.
Dec 01 21:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7e964cc7c8cb5c182d6e2fc0010a3200e9cc0d1e7629deb78db1c3a82eb2aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7e964cc7c8cb5c182d6e2fc0010a3200e9cc0d1e7629deb78db1c3a82eb2aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7e964cc7c8cb5c182d6e2fc0010a3200e9cc0d1e7629deb78db1c3a82eb2aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7e964cc7c8cb5c182d6e2fc0010a3200e9cc0d1e7629deb78db1c3a82eb2aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 01 21:03:05 compute-0 podman[263660]: 2025-12-01 21:03:05.510419394 +0000 UTC m=+0.164970519 container init 37ea588ed15152b2f9015273e39737f57aca20ef6055101f1791f839df1223c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_poitras, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 01 21:03:05 compute-0 podman[263660]: 2025-12-01 21:03:05.521549793 +0000 UTC m=+0.176100868 container start 37ea588ed15152b2f9015273e39737f57aca20ef6055101f1791f839df1223c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 01 21:03:05 compute-0 podman[263660]: 2025-12-01 21:03:05.52531591 +0000 UTC m=+0.179867055 container attach 37ea588ed15152b2f9015273e39737f57aca20ef6055101f1791f839df1223c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_poitras, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 01 21:03:06 compute-0 lvm[263755]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 21:03:06 compute-0 lvm[263755]: VG ceph_vg0 finished
Dec 01 21:03:06 compute-0 lvm[263757]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 21:03:06 compute-0 lvm[263757]: VG ceph_vg2 finished
Dec 01 21:03:06 compute-0 lvm[263756]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 21:03:06 compute-0 lvm[263756]: VG ceph_vg1 finished
Dec 01 21:03:06 compute-0 ceph-mon[75880]: pgmap v1046: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:06 compute-0 mystifying_poitras[263676]: {}
Dec 01 21:03:06 compute-0 systemd[1]: libpod-37ea588ed15152b2f9015273e39737f57aca20ef6055101f1791f839df1223c7.scope: Deactivated successfully.
Dec 01 21:03:06 compute-0 systemd[1]: libpod-37ea588ed15152b2f9015273e39737f57aca20ef6055101f1791f839df1223c7.scope: Consumed 1.371s CPU time.
Dec 01 21:03:06 compute-0 podman[263660]: 2025-12-01 21:03:06.363244239 +0000 UTC m=+1.017795344 container died 37ea588ed15152b2f9015273e39737f57aca20ef6055101f1791f839df1223c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 01 21:03:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-df7e964cc7c8cb5c182d6e2fc0010a3200e9cc0d1e7629deb78db1c3a82eb2aa-merged.mount: Deactivated successfully.
Dec 01 21:03:06 compute-0 podman[263660]: 2025-12-01 21:03:06.415829136 +0000 UTC m=+1.070380241 container remove 37ea588ed15152b2f9015273e39737f57aca20ef6055101f1791f839df1223c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_poitras, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 01 21:03:06 compute-0 systemd[1]: libpod-conmon-37ea588ed15152b2f9015273e39737f57aca20ef6055101f1791f839df1223c7.scope: Deactivated successfully.
Dec 01 21:03:06 compute-0 sudo[263583]: pam_unix(sudo:session): session closed for user root
Dec 01 21:03:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 01 21:03:06 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:03:06 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 01 21:03:06 compute-0 ceph-mon[75880]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:03:06 compute-0 sudo[263774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 01 21:03:06 compute-0 sudo[263774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 01 21:03:06 compute-0 sudo[263774]: pam_unix(sudo:session): session closed for user root
Dec 01 21:03:06 compute-0 nova_compute[244568]: 2025-12-01 21:03:06.959 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:07 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1047: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:07 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:03:07 compute-0 ceph-mon[75880]: from='mgr.14122 192.168.122.100:0/456459771' entity='mgr.compute-0.xhvuzu' 
Dec 01 21:03:07 compute-0 nova_compute[244568]: 2025-12-01 21:03:07.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:08 compute-0 ceph-mon[75880]: pgmap v1047: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:08 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:08 compute-0 nova_compute[244568]: 2025-12-01 21:03:08.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:08 compute-0 nova_compute[244568]: 2025-12-01 21:03:08.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:08 compute-0 nova_compute[244568]: 2025-12-01 21:03:08.987 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:03:08 compute-0 nova_compute[244568]: 2025-12-01 21:03:08.988 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:03:08 compute-0 nova_compute[244568]: 2025-12-01 21:03:08.988 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:03:08 compute-0 nova_compute[244568]: 2025-12-01 21:03:08.988 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 01 21:03:08 compute-0 nova_compute[244568]: 2025-12-01 21:03:08.989 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 21:03:09 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1048: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:09 compute-0 podman[263800]: 2025-12-01 21:03:09.146265528 +0000 UTC m=+0.096009788 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 01 21:03:09 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 21:03:09 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195198266' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:03:09 compute-0 nova_compute[244568]: 2025-12-01 21:03:09.605 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 21:03:09 compute-0 ceph-mon[75880]: pgmap v1048: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:09 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2195198266' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:03:09 compute-0 nova_compute[244568]: 2025-12-01 21:03:09.818 244572 WARNING nova.virt.libvirt.driver [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 01 21:03:09 compute-0 nova_compute[244568]: 2025-12-01 21:03:09.820 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5096MB free_disk=59.988265527412295GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 01 21:03:09 compute-0 nova_compute[244568]: 2025-12-01 21:03:09.820 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:03:09 compute-0 nova_compute[244568]: 2025-12-01 21:03:09.821 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:03:09 compute-0 nova_compute[244568]: 2025-12-01 21:03:09.905 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 01 21:03:09 compute-0 nova_compute[244568]: 2025-12-01 21:03:09.906 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 01 21:03:09 compute-0 nova_compute[244568]: 2025-12-01 21:03:09.926 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 01 21:03:10 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 01 21:03:10 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253580915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:03:10 compute-0 nova_compute[244568]: 2025-12-01 21:03:10.451 244572 DEBUG oslo_concurrency.processutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 01 21:03:10 compute-0 nova_compute[244568]: 2025-12-01 21:03:10.455 244572 DEBUG nova.compute.provider_tree [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed in ProviderTree for provider: 1adb778b-ac5d-48bb-abc3-c422b12ca516 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 01 21:03:10 compute-0 nova_compute[244568]: 2025-12-01 21:03:10.474 244572 DEBUG nova.scheduler.client.report [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Inventory has not changed for provider 1adb778b-ac5d-48bb-abc3-c422b12ca516 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 01 21:03:10 compute-0 nova_compute[244568]: 2025-12-01 21:03:10.475 244572 DEBUG nova.compute.resource_tracker [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 01 21:03:10 compute-0 nova_compute[244568]: 2025-12-01 21:03:10.475 244572 DEBUG oslo_concurrency.lockutils [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:03:10 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2253580915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 01 21:03:11 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1049: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:11 compute-0 nova_compute[244568]: 2025-12-01 21:03:11.471 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:11 compute-0 ceph-mon[75880]: pgmap v1049: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:13 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1050: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:13 compute-0 podman[263864]: 2025-12-01 21:03:13.142450191 +0000 UTC m=+0.090722723 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 01 21:03:13 compute-0 podman[263863]: 2025-12-01 21:03:13.142535513 +0000 UTC m=+0.093476578 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 01 21:03:13 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:14 compute-0 ceph-mon[75880]: pgmap v1050: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:15 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1051: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:16 compute-0 ceph-mon[75880]: pgmap v1051: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:17 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1052: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:18 compute-0 ceph-mon[75880]: pgmap v1052: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:18 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:19 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1053: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:20 compute-0 ceph-mon[75880]: pgmap v1053: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:21 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1054: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:22 compute-0 ceph-mon[75880]: pgmap v1054: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:23 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1055: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:23 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:24 compute-0 ceph-mon[75880]: pgmap v1055: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:25 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1056: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:26 compute-0 ceph-mon[75880]: pgmap v1056: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:27 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1057: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:27 compute-0 sshd-session[263912]: Accepted publickey for zuul from 192.168.122.10 port 60546 ssh2: ECDSA SHA256:6EhqTgodPOcyOWWmglACwrZpZJJZy33RNx+kqN6SZQQ
Dec 01 21:03:27 compute-0 systemd-logind[796]: New session 56 of user zuul.
Dec 01 21:03:27 compute-0 systemd[1]: Started Session 56 of User zuul.
Dec 01 21:03:27 compute-0 sshd-session[263912]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 01 21:03:28 compute-0 sudo[263916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 01 21:03:28 compute-0 sudo[263916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 01 21:03:28 compute-0 sshd-session[263910]: Received disconnect from 80.94.93.233 port 49334:11:  [preauth]
Dec 01 21:03:28 compute-0 sshd-session[263910]: Disconnected from authenticating user root 80.94.93.233 port 49334 [preauth]
Dec 01 21:03:28 compute-0 ceph-mon[75880]: pgmap v1057: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:28 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:29 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1058: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:29 compute-0 ceph-mon[75880]: pgmap v1058: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:30 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15000 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:31 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1059: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:31 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15002 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:31 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 01 21:03:31 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4036502004' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 01 21:03:32 compute-0 ceph-mon[75880]: from='client.15000 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:32 compute-0 ceph-mon[75880]: pgmap v1059: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:32 compute-0 ceph-mon[75880]: from='client.15002 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:32 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/4036502004' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 01 21:03:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Optimize plan auto_2025-12-01_21:03:32
Dec 01 21:03:32 compute-0 ceph-mgr[76174]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 01 21:03:32 compute-0 ceph-mgr[76174]: [balancer INFO root] do_upmap
Dec 01 21:03:32 compute-0 ceph-mgr[76174]: [balancer INFO root] pools ['backups', 'volumes', '.mgr', 'images', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec 01 21:03:32 compute-0 ceph-mgr[76174]: [balancer INFO root] prepared 0/10 upmap changes
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1060: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] scanning for idle connections..
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [volumes INFO mgr_util] cleaning up connections: []
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 21:03:33 compute-0 ceph-mgr[76174]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 01 21:03:33 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:34 compute-0 ceph-mon[75880]: pgmap v1060: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:34 compute-0 ovs-vsctl[264179]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 01 21:03:35 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1061: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:35 compute-0 virtqemud[244294]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 01 21:03:35 compute-0 virtqemud[244294]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 01 21:03:35 compute-0 virtqemud[244294]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 01 21:03:36 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: cache status {prefix=cache status} (starting...)
Dec 01 21:03:36 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: client ls {prefix=client ls} (starting...)
Dec 01 21:03:36 compute-0 lvm[264518]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 01 21:03:36 compute-0 lvm[264518]: VG ceph_vg2 finished
Dec 01 21:03:36 compute-0 ceph-mon[75880]: pgmap v1061: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:36 compute-0 lvm[264521]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 01 21:03:36 compute-0 lvm[264521]: VG ceph_vg0 finished
Dec 01 21:03:36 compute-0 lvm[264551]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 01 21:03:36 compute-0 lvm[264551]: VG ceph_vg1 finished
Dec 01 21:03:36 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15006 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:37 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: damage ls {prefix=damage ls} (starting...)
Dec 01 21:03:37 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1062: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:37 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump loads {prefix=dump loads} (starting...)
Dec 01 21:03:37 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 01 21:03:37 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15008 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:37 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 01 21:03:37 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 01 21:03:37 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 01 21:03:37 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec 01 21:03:37 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2896575640' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 01 21:03:37 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15012 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:37 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 01 21:03:38 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 01 21:03:38 compute-0 ceph-mon[75880]: from='client.15006 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:38 compute-0 ceph-mon[75880]: pgmap v1062: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:38 compute-0 ceph-mon[75880]: from='client.15008 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:38 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2896575640' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 01 21:03:38 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: ops {prefix=ops} (starting...)
Dec 01 21:03:38 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15015 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:38 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: 2025-12-01T21:03:38.364+0000 7f311224f640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 01 21:03:38 compute-0 ceph-mgr[76174]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 01 21:03:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 01 21:03:38 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3407641578' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:03:38 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 01 21:03:39 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996320341' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 01 21:03:39 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: session ls {prefix=session ls} (starting...)
Dec 01 21:03:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec 01 21:03:39 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3790390837' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 01 21:03:39 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1063: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:39 compute-0 ceph-mds[94156]: mds.cephfs.compute-0.pstuwl asok_command: status {prefix=status} (starting...)
Dec 01 21:03:39 compute-0 ceph-mon[75880]: from='client.15012 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:39 compute-0 ceph-mon[75880]: from='client.15015 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:39 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3407641578' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 01 21:03:39 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1996320341' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 01 21:03:39 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3790390837' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 01 21:03:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 01 21:03:39 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139094486' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 01 21:03:39 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 01 21:03:39 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139952099' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 01 21:03:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 01 21:03:40 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1126331725' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 01 21:03:40 compute-0 podman[264902]: 2025-12-01 21:03:40.10811945 +0000 UTC m=+0.071531661 container health_status f0402124acd006047039e31ac8641c94c214c705ee2e6cdba8b35d2f7a4b3edc (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15028 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] _maybe_adjust
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.620943187075977e-07 of space, bias 1.0, pg target 7.86282956122793e-05 quantized to 32 (current 32)
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.119668419926072e-07 of space, bias 1.0, pg target 3.3590052597782157e-05 quantized to 32 (current 32)
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668574553212283 of space, bias 1.0, pg target 0.20057236596368488 quantized to 32 (current 32)
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8098511608610893e-06 of space, bias 4.0, pg target 0.0021718213930333073 quantized to 16 (current 16)
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 01 21:03:40 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 01 21:03:40 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3663970558' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 01 21:03:40 compute-0 ceph-mon[75880]: pgmap v1063: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:40 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3139094486' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 01 21:03:40 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3139952099' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 01 21:03:40 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1126331725' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 01 21:03:40 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15032 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:41 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1064: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 01 21:03:41 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/779176907' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 01 21:03:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec 01 21:03:41 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/722926403' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 01 21:03:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 01 21:03:41 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1449937125' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 01 21:03:41 compute-0 ceph-mon[75880]: from='client.15028 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:41 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3663970558' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 01 21:03:41 compute-0 ceph-mon[75880]: from='client.15032 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:41 compute-0 ceph-mon[75880]: pgmap v1064: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:41 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/779176907' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 01 21:03:41 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/722926403' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 01 21:03:41 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1449937125' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 01 21:03:41 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 01 21:03:41 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550458490' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 01 21:03:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 01 21:03:42 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/342535383' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 01 21:03:42 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15044 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:42 compute-0 ceph-mgr[76174]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 01 21:03:42 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: 2025-12-01T21:03:42.341+0000 7f311224f640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 01 21:03:42 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15046 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:42 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2550458490' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 01 21:03:42 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/342535383' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 01 21:03:42 compute-0 ceph-mon[75880]: from='client.15044 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:42 compute-0 sshd-session[265182]: Received disconnect from 43.251.161.76 port 38074:11:  [preauth]
Dec 01 21:03:42 compute-0 sshd-session[265182]: Disconnected from authenticating user root 43.251.161.76 port 38074 [preauth]
Dec 01 21:03:42 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 01 21:03:42 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735797386' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007947 4 0.000138
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007890 4 0.000042
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007948 4 0.000036
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007916 4 0.000034
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007923 4 0.000059
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007882 4 0.000198
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007910 4 0.000045
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007894 4 0.000050
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007875 4 0.000039
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007838 4 0.000035
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007880 4 0.000041
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007817 4 0.000045
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007881 4 0.000073
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007804 4 0.000047
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007611 4 0.000202
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007793 4 0.000082
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007896 4 0.000081
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008058 4 0.000032
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006846 4 0.000057
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006798 4 0.000052
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006673 4 0.000054
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006813 4 0.000057
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006685 4 0.000059
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/25 lis/c=42/40 les/c/f=43/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007817 4 0.000063
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=37/19 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018910 7 0.000048
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018961 7 0.000076
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015844 7 0.000118
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019178 7 0.000079
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000065 1 0.000028
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018703 7 0.000035
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018568 7 0.000072
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000089 1 0.000011
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018856 7 0.000065
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000141 1 0.000011
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000181 1 0.000012
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000186 1 0.000036
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000242 1 0.000015
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000283 1 0.000088
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022983 7 0.000072
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021248 7 0.000031
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022953 7 0.000054
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022414 7 0.000043
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021718 7 0.000192
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021627 7 0.000035
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022228 7 0.000050
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021424 7 0.000034
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000106 1 0.000059
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022711 7 0.000107
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000147 1 0.000060
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.008326 1 0.000054
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022524 7 0.000085
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.008491 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.023125 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000248 1 0.000078
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000235 1 0.000028
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000286 1 0.000043
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000400 1 0.000105
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022833 7 0.000039
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021776 7 0.000032
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024390 7 0.000159
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021996 7 0.000033
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022075 7 0.000031
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022119 7 0.000071
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000700 1 0.000020
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000796 1 0.000026
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001155 1 0.000057
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001208 1 0.000164
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000883 1 0.000026
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000978 1 0.000021
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000991 1 0.000013
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001045 1 0.000033
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022713 7 0.000058
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022641 7 0.000031
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022497 7 0.000103
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001048 1 0.000027
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001135 1 0.000076
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000151 1 0.000031
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000172 1 0.000016
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000167 1 0.000059
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.012048 1 0.000107
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.012280 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.026928 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.019003 1 0.000082
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.019358 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.033920 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.026305 1 0.000031
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.026774 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.041375 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.033725 1 0.000047
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.034211 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.049090 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.041006 1 0.000026
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.041515 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.054662 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.048316 1 0.000045
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.048868 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.062566 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.055829 1 0.000019
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.056406 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.069518 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.063035 1 0.000017
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.063632 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.077719 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.070438 1 0.000041
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.071010 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.084463 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.077909 1 0.000016
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.078535 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.092957 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.085155 1 0.000016
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.085815 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.098890 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.092491 1 0.000017
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.093213 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.106295 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.099819 1 0.000015
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.100590 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.114670 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.107156 1 0.000015
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.107989 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.122254 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.114507 1 0.000015
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.115393 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.129087 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.121833 1 0.000013
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.122901 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.135785 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.126594 1 0.000022
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.126682 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.145630 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.133929 1 0.000040
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.134061 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.153058 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.141340 1 0.000019
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.141506 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.157397 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.148655 1 0.000038
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.148861 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.168065 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.156001 1 0.000052
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.156223 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.174967 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.163298 1 0.000042
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.163575 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.182461 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.170647 1 0.000031
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.170965 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.189578 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.174034 1 0.000052
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.174162 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.197193 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.181300 1 0.000086
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.181493 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.202769 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.188587 1 0.000021
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.188857 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.211838 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.195952 1 0.000037
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.196221 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.218120 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.203380 1 0.000030
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.203700 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.225366 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.210718 1 0.000028
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.211150 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.233602 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.218010 1 0.000043
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.218767 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.240222 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.225060 1 0.000037
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.225885 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.248140 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.232122 1 0.000087
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.233308 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.256058 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.239289 1 0.000052
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.240522 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.263118 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.246744 1 0.000072
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.247692 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.270559 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.254134 1 0.000023
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.255153 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.276958 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.261378 1 0.000019
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.262400 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.284418 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.268793 1 0.000018
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.269867 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.294315 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.276421 1 0.000079
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.277549 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/18 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.299718 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.283623 1 0.000021
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.284796 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.306922 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.290775 1 0.000018
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.290950 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.313699 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.298146 1 0.000015
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.298339 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.321000 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.305530 1 0.000015
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.305740 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.328306 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 1015808 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:21.414583+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 43 handle_osd_map epochs [43,44], i have 43, src has [1,44]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 308916 data_alloc: 218103808 data_used: 593
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 983040 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:22.414736+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:52.375886+0000 osd.2 (osd.2) 14 : cluster [DBG] 5.1c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:52.386469+0000 osd.2 (osd.2) 15 : cluster [DBG] 5.1c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 15)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:52.375886+0000 osd.2 (osd.2) 14 : cluster [DBG] 5.1c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:52.386469+0000 osd.2 (osd.2) 15 : cluster [DBG] 5.1c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60948480 unmapped: 909312 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:23.414977+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:53.360950+0000 osd.2 (osd.2) 16 : cluster [DBG] 2.1a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:53.371540+0000 osd.2 (osd.2) 17 : cluster [DBG] 2.1a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 17)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:53.360950+0000 osd.2 (osd.2) 16 : cluster [DBG] 2.1a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:53.371540+0000 osd.2 (osd.2) 17 : cluster [DBG] 2.1a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 901120 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.157582283s of 10.313741684s, submitted: 326
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:24.415244+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:54.361103+0000 osd.2 (osd.2) 18 : cluster [DBG] 5.1f scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:54.371654+0000 osd.2 (osd.2) 19 : cluster [DBG] 5.1f scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 19)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:54.361103+0000 osd.2 (osd.2) 18 : cluster [DBG] 5.1f scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:54.371654+0000 osd.2 (osd.2) 19 : cluster [DBG] 5.1f scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 44 heartbeat osd_stat(store_statfs(0x4fe159000/0x0/0x4ffc00000, data 0x2d9e1/0x71000, compress 0x0/0x0/0x0, omap 0x4b03, meta 0x1a2b4fd), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 44 handle_osd_map epochs [45,46], i have 44, src has [1,46]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 44 handle_osd_map epochs [45,46], i have 46, src has [1,46]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60973056 unmapped: 884736 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe159000/0x0/0x4ffc00000, data 0x2d9e1/0x71000, compress 0x0/0x0/0x0, omap 0x4b03, meta 0x1a2b4fd), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:25.415434+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 876544 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:26.415684+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:56.362671+0000 osd.2 (osd.2) 20 : cluster [DBG] 5.10 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:33:56.373267+0000 osd.2 (osd.2) 21 : cluster [DBG] 5.10 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 21)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:56.362671+0000 osd.2 (osd.2) 20 : cluster [DBG] 5.10 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:33:56.373267+0000 osd.2 (osd.2) 21 : cluster [DBG] 5.10 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 326852 data_alloc: 218103808 data_used: 593
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 868352 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:27.415973+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61079552 unmapped: 778240 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:28.416133+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 770048 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 48 heartbeat osd_stat(store_statfs(0x4fe149000/0x0/0x4ffc00000, data 0x330ab/0x7d000, compress 0x0/0x0/0x0, omap 0x52a4, meta 0x1a2ad5c), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 48 handle_osd_map epochs [49,50], i have 48, src has [1,50]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 48 handle_osd_map epochs [49,49], i have 50, src has [1,49]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:29.416244+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 761856 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:30.416382+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 50 heartbeat osd_stat(store_statfs(0x4fe141000/0x0/0x4ffc00000, data 0x35b41/0x83000, compress 0x0/0x0/0x0, omap 0x552f, meta 0x1a2aad1), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 753664 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:31.416593+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 336528 data_alloc: 218103808 data_used: 593
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 753664 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:32.416845+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 50 handle_osd_map epochs [51,52], i have 50, src has [1,52]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8(unlocked)] enter Initial
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=0 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000129 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=0 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000043
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000294 1 0.000075
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001174 2 0.000069
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 52 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 52 heartbeat osd_stat(store_statfs(0x4fe141000/0x0/0x4ffc00000, data 0x35b41/0x83000, compress 0x0/0x0/0x0, omap 0x552f, meta 0x1a2aad1), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 679936 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:33.417028+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:03.292070+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.14 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:03.302563+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.14 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 52 handle_osd_map epochs [52,53], i have 53, src has [1,53]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011456 2 0.000076
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013011 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/39 les/c/f=53/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002856 3 0.000463
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/39 les/c/f=53/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/39 les/c/f=53/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=52/53 n=1 ec=39/23 lis/c=52/39 les/c/f=53/41/0 sis=52) [2] r=0 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 23)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:03.292070+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.14 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:03.302563+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.14 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fe141000/0x0/0x4ffc00000, data 0x3876d/0x89000, compress 0x0/0x0/0x0, omap 0x57ba, meta 0x1a2a846), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 671744 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:34.417255+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:04.263906+0000 osd.2 (osd.2) 24 : cluster [DBG] 2.12 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:04.274430+0000 osd.2 (osd.2) 25 : cluster [DBG] 2.12 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 25)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:04.263906+0000 osd.2 (osd.2) 24 : cluster [DBG] 2.12 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:04.274430+0000 osd.2 (osd.2) 25 : cluster [DBG] 2.12 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fe13e000/0x0/0x4ffc00000, data 0x39bed/0x8c000, compress 0x0/0x0/0x0, omap 0x5a45, meta 0x1a2a5bb), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 663552 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:35.417467+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 663552 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:36.417702+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 348824 data_alloc: 218103808 data_used: 593
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 663552 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.802775383s of 12.856064796s, submitted: 19
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:37.417891+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:07.217368+0000 osd.2 (osd.2) 26 : cluster [DBG] 2.10 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:07.227900+0000 osd.2 (osd.2) 27 : cluster [DBG] 2.10 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 27)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:07.217368+0000 osd.2 (osd.2) 26 : cluster [DBG] 2.10 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:07.227900+0000 osd.2 (osd.2) 27 : cluster [DBG] 2.10 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 630784 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:38.418127+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 622592 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:39.418477+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:09.230046+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.17 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:09.240643+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.17 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 29)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:09.230046+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.17 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:09.240643+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.17 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 1662976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:40.418911+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:10.204372+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:10.214848+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 54 handle_osd_map epochs [55,56], i have 54, src has [1,56]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 31)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:10.204372+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:10.214848+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 56 heartbeat osd_stat(store_statfs(0x4fe13d000/0x0/0x4ffc00000, data 0x3b203/0x8f000, compress 0x0/0x0/0x0, omap 0x5cd0, meta 0x1a2a330), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 1597440 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:41.419175+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 364937 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 1695744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:42.419332+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61251584 unmapped: 1654784 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:43.419575+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 1646592 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:44.419763+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:14.222545+0000 osd.2 (osd.2) 32 : cluster [DBG] 2.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:14.233145+0000 osd.2 (osd.2) 33 : cluster [DBG] 2.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 33)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:14.222545+0000 osd.2 (osd.2) 32 : cluster [DBG] 2.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:14.233145+0000 osd.2 (osd.2) 33 : cluster [DBG] 2.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 58 heartbeat osd_stat(store_statfs(0x4fe12b000/0x0/0x4ffc00000, data 0x4072f/0x9b000, compress 0x0/0x0/0x0, omap 0x6471, meta 0x1a29b8f), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 58 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 1744896 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:45.420043+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:15.202976+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:15.213600+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 35)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:15.202976+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:15.213600+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 1695744 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:46.420301+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:16.170714+0000 osd.2 (osd.2) 36 : cluster [DBG] 2.c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:16.181252+0000 osd.2 (osd.2) 37 : cluster [DBG] 2.c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe12e000/0x0/0x4ffc00000, data 0x41d4d/0x9e000, compress 0x0/0x0/0x0, omap 0x66fc, meta 0x1a29904), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 379334 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 37)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:16.170714+0000 osd.2 (osd.2) 36 : cluster [DBG] 2.c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:16.181252+0000 osd.2 (osd.2) 37 : cluster [DBG] 2.c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 1679360 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:47.420516+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 60 heartbeat osd_stat(store_statfs(0x4fe129000/0x0/0x4ffc00000, data 0x43363/0xa1000, compress 0x0/0x0/0x0, omap 0x6987, meta 0x1a29679), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.860921860s of 10.903360367s, submitted: 18
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 1671168 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:48.420730+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:18.120793+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.b scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:18.131245+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.b scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 39)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:18.120793+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.b scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:18.131245+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.b scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 1662976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:49.420949+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:19.114225+0000 osd.2 (osd.2) 40 : cluster [DBG] 2.0 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:19.124779+0000 osd.2 (osd.2) 41 : cluster [DBG] 2.0 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 41)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:19.114225+0000 osd.2 (osd.2) 40 : cluster [DBG] 2.0 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:19.124779+0000 osd.2 (osd.2) 41 : cluster [DBG] 2.0 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 1662976 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:50.421206+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 60 handle_osd_map epochs [62,63], i have 60, src has [1,63]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 60 handle_osd_map epochs [61,63], i have 60, src has [1,63]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 499712 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:51.421341+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 396828 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 450560 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:52.421466+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:22.061458+0000 osd.2 (osd.2) 42 : cluster [DBG] 5.0 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:22.071999+0000 osd.2 (osd.2) 43 : cluster [DBG] 5.0 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f(unlocked)] enter Initial
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=0 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000086 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=0 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000023
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000107 1 0.000039
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001271 2 0.000039
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 63 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 43)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:22.061458+0000 osd.2 (osd.2) 42 : cluster [DBG] 5.0 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:22.071999+0000 osd.2 (osd.2) 43 : cluster [DBG] 5.0 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 handle_osd_map epochs [64,64], i have 64, src has [1,64]
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.410014 2 0.000165
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 0.411495 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.001976 3 0.000173
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000135 1 0.000108
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.137404 3 0.000039
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=63/64 n=1 ec=39/23 lis/c=63/45 les/c/f=64/46/0 sis=63) [2] r=0 lpr=63 pi=[45,63)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 368640 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:53.421686+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11c000/0x0/0x4ffc00000, data 0x488a5/0xae000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 352256 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:54.421806+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:24.029842+0000 osd.2 (osd.2) 44 : cluster [DBG] 2.1 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:24.040451+0000 osd.2 (osd.2) 45 : cluster [DBG] 2.1 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 45)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:24.029842+0000 osd.2 (osd.2) 44 : cluster [DBG] 2.1 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:24.040451+0000 osd.2 (osd.2) 45 : cluster [DBG] 2.1 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:55.422092+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 344064 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:56.422239+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 409900 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 335872 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:57.422460+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:27.019687+0000 osd.2 (osd.2) 46 : cluster [DBG] 5.6 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:27.030218+0000 osd.2 (osd.2) 47 : cluster [DBG] 5.6 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 47)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:27.019687+0000 osd.2 (osd.2) 46 : cluster [DBG] 5.6 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:27.030218+0000 osd.2 (osd.2) 47 : cluster [DBG] 5.6 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 303104 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:58.422690+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 303104 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:59.422871+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.782981873s of 11.941884995s, submitted: 20
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 303104 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:00.423168+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:30.062666+0000 osd.2 (osd.2) 48 : cluster [DBG] 5.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:30.072779+0000 osd.2 (osd.2) 49 : cluster [DBG] 5.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 49)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:30.062666+0000 osd.2 (osd.2) 48 : cluster [DBG] 5.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:30.072779+0000 osd.2 (osd.2) 49 : cluster [DBG] 5.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:01.423474+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 413714 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:02.423683+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 278528 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:03.423835+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:04.423999+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:05.424275+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:06.424467+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 413714 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 262144 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:07.424633+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:37.163610+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.d scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:37.174213+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.d scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 51)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:37.163610+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.d scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:37.174213+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.d scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:08.424845+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:38.126769+0000 osd.2 (osd.2) 52 : cluster [DBG] 5.1b scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:38.137375+0000 osd.2 (osd.2) 53 : cluster [DBG] 5.1b scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 53)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:38.126769+0000 osd.2 (osd.2) 52 : cluster [DBG] 5.1b scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:38.137375+0000 osd.2 (osd.2) 53 : cluster [DBG] 5.1b scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:09.425070+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:39.082620+0000 osd.2 (osd.2) 54 : cluster [DBG] 2.1e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:39.093153+0000 osd.2 (osd.2) 55 : cluster [DBG] 2.1e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.030481339s of 10.046459198s, submitted: 8
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 55)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:39.082620+0000 osd.2 (osd.2) 54 : cluster [DBG] 2.1e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:39.093153+0000 osd.2 (osd.2) 55 : cluster [DBG] 2.1e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 212992 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:10.425312+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:40.109061+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.1b scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:40.119614+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.1b scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 57)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:40.109061+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.1b scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:40.119614+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.1b scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:11.425599+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423364 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:12.425778+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:13.425945+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:14.426089+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 188416 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:15.426285+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:45.078921+0000 osd.2 (osd.2) 58 : cluster [DBG] 4.1a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:45.089433+0000 osd.2 (osd.2) 59 : cluster [DBG] 4.1a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 59)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:45.078921+0000 osd.2 (osd.2) 58 : cluster [DBG] 4.1a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:45.089433+0000 osd.2 (osd.2) 59 : cluster [DBG] 4.1a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:16.426557+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 425777 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 180224 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:17.426729+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 172032 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:18.426895+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:48.119043+0000 osd.2 (osd.2) 60 : cluster [DBG] 4.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:48.129594+0000 osd.2 (osd.2) 61 : cluster [DBG] 4.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 61)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:48.119043+0000 osd.2 (osd.2) 60 : cluster [DBG] 4.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:48.129594+0000 osd.2 (osd.2) 61 : cluster [DBG] 4.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 172032 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:19.427111+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.991535187s of 10.001205444s, submitted: 6
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:20.427235+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:50.110398+0000 osd.2 (osd.2) 62 : cluster [DBG] 3.7 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:50.120998+0000 osd.2 (osd.2) 63 : cluster [DBG] 3.7 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 63)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:50.110398+0000 osd.2 (osd.2) 62 : cluster [DBG] 3.7 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:50.120998+0000 osd.2 (osd.2) 63 : cluster [DBG] 3.7 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 147456 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:21.427439+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430599 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:22.427566+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:23.427691+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 131072 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:24.427922+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:54.137839+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.18 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:54.148419+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.18 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 65)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:54.137839+0000 osd.2 (osd.2) 64 : cluster [DBG] 4.18 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:54.148419+0000 osd.2 (osd.2) 65 : cluster [DBG] 4.18 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:25.428247+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:55.122167+0000 osd.2 (osd.2) 66 : cluster [DBG] 7.2 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:55.132751+0000 osd.2 (osd.2) 67 : cluster [DBG] 7.2 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 67)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:55.122167+0000 osd.2 (osd.2) 66 : cluster [DBG] 7.2 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:55.132751+0000 osd.2 (osd.2) 67 : cluster [DBG] 7.2 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:26.428550+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 435423 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 106496 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:27.428845+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:57.132067+0000 osd.2 (osd.2) 68 : cluster [DBG] 7.1 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:57.142632+0000 osd.2 (osd.2) 69 : cluster [DBG] 7.1 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 69)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:57.132067+0000 osd.2 (osd.2) 68 : cluster [DBG] 7.1 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:57.142632+0000 osd.2 (osd.2) 69 : cluster [DBG] 7.1 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 98304 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:28.429247+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:29.429476+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:59.196990+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.1 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:34:59.207500+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.1 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 71)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:59.196990+0000 osd.2 (osd.2) 70 : cluster [DBG] 4.1 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:34:59.207500+0000 osd.2 (osd.2) 71 : cluster [DBG] 4.1 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:30.429775+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:31.430015+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440245 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.937257767s of 12.093973160s, submitted: 10
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:32.430211+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:02.204453+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:02.215040+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 73)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:02.204453+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:02.215040+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 73728 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:33.430429+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:03.239660+0000 osd.2 (osd.2) 74 : cluster [DBG] 7.5 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:03.250240+0000 osd.2 (osd.2) 75 : cluster [DBG] 7.5 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 75)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:03.239660+0000 osd.2 (osd.2) 74 : cluster [DBG] 7.5 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:03.250240+0000 osd.2 (osd.2) 75 : cluster [DBG] 7.5 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 57344 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:34.430652+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:04.234703+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.5 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:04.245243+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.5 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 77)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:04.234703+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.5 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:04.245243+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.5 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:35.430879+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:05.224808+0000 osd.2 (osd.2) 78 : cluster [DBG] 7.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:05.235378+0000 osd.2 (osd.2) 79 : cluster [DBG] 7.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 79)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:05.224808+0000 osd.2 (osd.2) 78 : cluster [DBG] 7.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:05.235378+0000 osd.2 (osd.2) 79 : cluster [DBG] 7.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 49152 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:36.431077+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 449889 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 40960 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:37.431225+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:07.277722+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:07.288298+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 81)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:07.277722+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:07.288298+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:38.431393+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:39.431603+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:40.431795+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:41.431959+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:11.296957+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:11.307478+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 83)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:11.296957+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:11.307478+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454711 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.079680443s of 10.133202553s, submitted: 12
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:42.432130+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:12.337689+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.11 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:12.348217+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.11 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 85)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:12.337689+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.11 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:12.348217+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.11 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:43.432326+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:44.432529+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:45.432707+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:46.432873+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 457124 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:47.433083+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:48.433306+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:18.278435+0000 osd.2 (osd.2) 86 : cluster [DBG] 7.15 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:18.289054+0000 osd.2 (osd.2) 87 : cluster [DBG] 7.15 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 87)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:18.278435+0000 osd.2 (osd.2) 86 : cluster [DBG] 7.15 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:18.289054+0000 osd.2 (osd.2) 87 : cluster [DBG] 7.15 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:49.433615+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:50.433894+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:51.434127+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 459537 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:52.434271+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.807907104s of 10.841655731s, submitted: 4
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:53.434503+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:23.179388+0000 osd.2 (osd.2) 88 : cluster [DBG] 4.11 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:23.190007+0000 osd.2 (osd.2) 89 : cluster [DBG] 4.11 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 89)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:23.179388+0000 osd.2 (osd.2) 88 : cluster [DBG] 4.11 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:23.190007+0000 osd.2 (osd.2) 89 : cluster [DBG] 4.11 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:54.434863+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 991232 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:55.435019+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 983040 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:56.435220+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:26.189468+0000 osd.2 (osd.2) 90 : cluster [DBG] 4.13 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:26.200069+0000 osd.2 (osd.2) 91 : cluster [DBG] 4.13 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 91)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:26.189468+0000 osd.2 (osd.2) 90 : cluster [DBG] 4.13 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:26.200069+0000 osd.2 (osd.2) 91 : cluster [DBG] 4.13 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464363 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:57.435444+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:58.435609+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 966656 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:59.435734+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:00.435883+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 958464 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:01.436149+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 950272 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464363 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:02.436234+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:03.436392+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:04.436528+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 917504 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.918931961s of 11.925214767s, submitted: 4
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:05.436752+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:35.104582+0000 osd.2 (osd.2) 92 : cluster [DBG] 3.16 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:35.115127+0000 osd.2 (osd.2) 93 : cluster [DBG] 3.16 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:06.436976+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 93)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:35.104582+0000 osd.2 (osd.2) 92 : cluster [DBG] 3.16 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:35.115127+0000 osd.2 (osd.2) 93 : cluster [DBG] 3.16 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 466776 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:07.437157+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:08.437237+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:38.090955+0000 osd.2 (osd.2) 94 : cluster [DBG] 3.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:38.101522+0000 osd.2 (osd.2) 95 : cluster [DBG] 3.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 95)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:38.090955+0000 osd.2 (osd.2) 94 : cluster [DBG] 3.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:38.101522+0000 osd.2 (osd.2) 95 : cluster [DBG] 3.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:09.437417+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:39.057932+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:39.068546+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 868352 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 97)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:39.057932+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.e scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:39.068546+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.e scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:10.437628+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:40.058947+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:40.069522+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 851968 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 99)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:40.058947+0000 osd.2 (osd.2) 98 : cluster [DBG] 7.c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:40.069522+0000 osd.2 (osd.2) 99 : cluster [DBG] 7.c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:11.437821+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:41.087741+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.11 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:41.098430+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.11 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476422 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 101)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:41.087741+0000 osd.2 (osd.2) 100 : cluster [DBG] 3.11 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:41.098430+0000 osd.2 (osd.2) 101 : cluster [DBG] 3.11 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:12.438065+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:13.438208+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:14.438334+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:44.101466+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.1c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:44.112205+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.1c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.202469826s of 10.040797234s, submitted: 12
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:15.438556+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 4 last_log 105 sent 103 num 4 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:45.145462+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.1a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:45.156059+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.1a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 103)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:44.101466+0000 osd.2 (osd.2) 102 : cluster [DBG] 7.1c scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:44.112205+0000 osd.2 (osd.2) 103 : cluster [DBG] 7.1c scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:16.438754+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 105)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:45.145462+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.1a scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:45.156059+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.1a scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481248 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:17.438911+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:47.154535+0000 osd.2 (osd.2) 106 : cluster [DBG] 3.1d scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:47.165086+0000 osd.2 (osd.2) 107 : cluster [DBG] 3.1d scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:18.439063+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 4 last_log 109 sent 107 num 4 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:48.137435+0000 osd.2 (osd.2) 108 : cluster [DBG] 3.18 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:48.147997+0000 osd.2 (osd.2) 109 : cluster [DBG] 3.18 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 107)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:47.154535+0000 osd.2 (osd.2) 106 : cluster [DBG] 3.1d scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:47.165086+0000 osd.2 (osd.2) 107 : cluster [DBG] 3.1d scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:19.439224+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 109)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:48.137435+0000 osd.2 (osd.2) 108 : cluster [DBG] 3.18 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:48.147997+0000 osd.2 (osd.2) 109 : cluster [DBG] 3.18 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:20.439390+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:21.439587+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486074 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:22.439846+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:23.439974+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:24.440271+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.685177803s of 10.026976585s, submitted: 6
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:25.440468+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:55.172445+0000 osd.2 (osd.2) 110 : cluster [DBG] 6.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:55.183044+0000 osd.2 (osd.2) 111 : cluster [DBG] 6.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 111)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:55.172445+0000 osd.2 (osd.2) 110 : cluster [DBG] 6.8 scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:55.183044+0000 osd.2 (osd.2) 111 : cluster [DBG] 6.8 scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:26.440732+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 488485 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:27.440983+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:57.153674+0000 osd.2 (osd.2) 112 : cluster [DBG] 6.f scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  will send 2025-12-01T20:35:57.174867+0000 osd.2 (osd.2) 113 : cluster [DBG] 6.f scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client handle_log_ack log(last 113)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:57.153674+0000 osd.2 (osd.2) 112 : cluster [DBG] 6.f scrub starts
Dec 01 21:03:42 compute-0 ceph-osd[88745]: log_client  logged 2025-12-01T20:35:57.174867+0000 osd.2 (osd.2) 113 : cluster [DBG] 6.f scrub ok
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:28.441333+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:29.441600+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:30.441829+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:31.442016+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:32.442246+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:33.442447+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:34.442678+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:35.442913+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:36.443107+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:37.443323+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:38.443442+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:39.443621+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 696320 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:40.443864+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:41.444092+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 688128 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:42.444307+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:43.444518+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:44.444739+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:45.444983+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:46.445144+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:47.445416+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:48.445625+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:49.445951+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:50.446232+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:51.446444+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:52.446634+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:53.446804+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:54.446983+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:55.447247+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:56.447522+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:57.447775+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:58.447912+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 589824 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:59.448223+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:00.448558+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:01.448765+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 573440 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:02.448985+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 573440 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:03.449153+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:04.449334+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:05.449518+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:06.449848+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:07.450341+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:08.450621+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 540672 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:09.450854+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:10.451139+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:11.451515+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:12.451791+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:13.452034+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:14.452274+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:15.452598+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:16.452817+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:17.452962+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:18.453083+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:19.453217+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 499712 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:20.453384+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:21.453543+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:22.453661+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:23.453803+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:24.454003+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:25.454202+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:26.454362+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:27.454494+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:28.454618+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:29.454790+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:30.455039+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:31.455216+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:32.455359+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:33.455546+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:34.455789+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:35.456058+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:36.456390+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 417792 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:37.456602+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:38.456785+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:39.457067+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:40.457319+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:41.457562+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:42.457842+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 385024 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:43.458112+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:44.458390+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:45.458635+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:46.458948+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:47.459286+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:48.459484+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:49.459713+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 344064 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:50.459947+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:51.460147+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 335872 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:52.460424+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:53.460712+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:54.460916+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:55.461100+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:56.461293+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:57.461517+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:58.461731+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:59.461903+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 311296 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:00.462147+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:01.462358+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:02.462542+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:03.462713+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:04.462914+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:05.463144+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:06.463302+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:07.463466+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:08.463648+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:09.463806+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 237568 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:10.464407+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:11.464550+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:12.464693+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:13.464837+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:14.464992+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:15.465117+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:16.465251+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:17.465354+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:18.465532+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:19.465678+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:20.466693+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:21.467395+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:22.467565+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63758336 unmapped: 196608 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:23.467816+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:24.468065+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63782912 unmapped: 172032 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:25.468305+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:26.468808+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:27.469499+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:28.469949+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:29.470239+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:30.470507+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:31.470780+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:32.470990+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:33.471229+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:34.471415+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:35.471691+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:36.471885+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:37.472078+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:38.472277+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:39.472504+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:40.472814+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:41.473289+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:42.473824+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:43.474335+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:44.474728+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:45.475052+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:46.475470+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 90112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:47.475651+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:48.475819+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:49.476008+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:50.476238+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:51.476459+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:52.476697+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:53.476923+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:54.477142+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:55.477358+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:56.477513+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:57.477735+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 49152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:58.477908+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 49152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:59.478147+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:00.478424+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:01.478737+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:02.478938+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:03.479103+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:04.479297+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:05.479489+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:06.479748+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:07.479960+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:08.480163+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:09.480378+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:10.480610+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:11.480977+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:12.481307+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:13.481527+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:14.481713+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:15.481924+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:16.482151+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:17.482376+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:18.482544+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:19.482713+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:20.482919+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:21.483049+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:22.483161+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:23.483240+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 1015808 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:24.483369+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:25.483518+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:26.483607+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:27.483781+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:28.483942+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:29.484144+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:30.484395+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:31.484522+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:32.484697+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:33.484858+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:34.484994+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:35.485219+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:36.485404+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:37.485603+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:38.485867+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:39.486102+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:40.486276+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:41.486445+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:42.486625+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:43.486872+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:44.487299+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:45.487754+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:46.488016+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:47.488218+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:48.488385+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:49.488626+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:50.488924+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:51.489164+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:52.489433+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:53.489613+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:54.490083+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:55.490298+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:56.490543+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:57.490725+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:58.490897+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:59.491085+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:00.491338+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:01.491596+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:02.491828+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:03.492159+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:04.492377+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:05.492597+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:06.492895+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:07.493132+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:08.493347+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:09.493531+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:10.493722+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:11.493879+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:12.494061+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:13.494203+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:14.494343+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:15.494474+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:16.494596+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:17.494763+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:18.494928+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:19.495087+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:20.495275+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:21.495423+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:22.495577+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:23.495698+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:24.495855+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:25.496022+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:26.496195+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:27.496367+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:28.496547+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:29.496771+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:30.496955+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:31.497119+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:32.497290+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:33.497462+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:34.498037+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:35.498203+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:36.498341+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:37.498786+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:38.499165+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:39.499363+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:40.499566+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:41.499779+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:42.500088+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:43.500356+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:44.500576+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:45.500917+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:46.501313+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:47.501558+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:48.501752+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:49.502007+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:50.502274+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:51.502485+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:52.502717+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:53.502922+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:54.503109+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:55.503265+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:56.503428+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:57.503649+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:58.503892+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:59.504230+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:00.504453+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:01.504683+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:02.504879+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:03.505094+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:04.505330+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:05.505500+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:06.505742+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:07.505924+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:08.506076+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:09.506157+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:10.506370+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:11.506497+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:12.506667+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:13.506802+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:14.506956+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:15.507045+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:16.507233+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:17.507320+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:18.507445+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:19.507526+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:20.507658+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:21.527305+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:22.527462+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:23.527669+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:24.527841+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:25.528098+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:26.528343+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 622592 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:27.528571+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 622592 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:28.528812+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:29.529045+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:30.529293+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:31.529475+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:32.529625+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:33.529778+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:34.529950+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:35.530091+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:36.530244+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:37.530935+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:38.531596+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:39.531746+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:40.531899+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:41.532074+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:42.532412+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:43.532538+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:44.532653+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:45.532805+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:46.532946+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:47.533083+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:48.533230+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:49.533379+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:50.533528+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:51.533684+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:52.533826+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:53.534113+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:54.534272+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:55.534440+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:56.534631+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:57.534775+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:58.535032+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:59.535228+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:00.535430+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:01.535558+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:02.535722+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:03.536225+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:04.536380+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:05.536604+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:06.536752+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:07.536950+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:08.537115+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:09.537287+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:10.537454+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:11.537617+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:12.537733+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:13.537844+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:14.537997+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:15.538130+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:16.538260+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:17.538400+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:18.538497+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:19.538659+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:20.538899+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:21.539110+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:22.539287+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:23.539470+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:24.539660+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:25.539961+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:26.540258+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:27.540588+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:28.540827+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:29.541061+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:30.541255+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:31.541363+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:32.541508+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:33.541682+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:34.541849+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:35.541993+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:36.542151+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:37.542265+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:38.542414+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:39.543326+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:40.543498+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:41.543632+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:42.544119+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:43.545170+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:44.546349+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:45.546629+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:46.546797+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:47.547002+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:48.547430+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:49.547738+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:50.548089+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:51.548261+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:52.548850+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:53.549033+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:54.549498+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:55.549701+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4156 writes, 19K keys, 4156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4156 writes, 370 syncs, 11.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4156 writes, 19K keys, 4156 commit groups, 1.0 writes per commit group, ingest: 16.17 MB, 0.03 MB/s
                                           Interval WAL: 4156 writes, 370 syncs, 11.23 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:56.550051+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:57.550457+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:58.550718+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:59.550927+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:00.551159+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 253952 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:01.551392+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 245760 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:02.551547+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 245760 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:03.551684+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:04.551825+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:05.552014+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 229376 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:06.552203+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 221184 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:07.552471+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 221184 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:08.552738+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 212992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:09.552949+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 212992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:10.553106+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 212992 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:11.553319+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:12.553781+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 204800 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:13.554236+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:14.554735+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:15.554884+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:16.555062+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:17.555211+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:18.555359+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:19.555563+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:20.555810+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:21.555989+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 172032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:22.556167+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 172032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:23.556363+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:24.556637+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:25.556835+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:26.557042+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:27.557371+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:28.557641+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:29.557825+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:30.558133+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:31.558383+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:32.558582+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:33.558848+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:34.558988+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:35.559235+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:36.559440+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:37.559671+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:38.559858+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:39.560033+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:40.560240+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:41.560472+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:42.560621+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:43.560759+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:44.560889+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:45.561031+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:46.561267+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 90112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:47.561395+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64913408 unmapped: 90112 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:48.561561+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:49.561749+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:50.561969+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:51.562120+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:52.562277+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:53.562483+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:54.562619+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:55.562749+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:56.562900+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:57.563133+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:58.563281+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:59.563423+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:00.563633+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:01.563828+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:02.564037+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:03.564236+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:04.564386+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:05.564539+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:06.564695+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:07.564843+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:08.564962+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:09.565083+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:10.565231+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:11.565358+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:12.565487+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 8192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:13.565653+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:14.565786+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:15.565940+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:16.566095+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:17.566262+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:18.566390+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:19.566502+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:20.566670+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:21.566828+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:22.566992+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:23.567116+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:24.567232+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:25.567378+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:26.568028+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:27.568174+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:28.568352+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:29.568469+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:30.568616+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:31.568796+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:32.568925+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:33.569044+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:34.569216+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:35.569339+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:36.569476+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:37.569608+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:38.569865+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:39.570321+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:40.570567+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:41.570724+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:42.570844+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:43.571088+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:44.571325+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:45.571546+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:46.571819+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:47.572044+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:48.573131+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:49.573364+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:50.573909+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:51.574098+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:52.574338+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:53.574555+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:54.574726+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:55.574926+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:56.575216+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:57.575387+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:58.575567+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:59.575731+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:00.575954+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:01.576137+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:02.576377+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:03.576544+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:04.576701+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:05.576848+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:06.577006+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:07.577216+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:08.577440+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:09.577559+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:10.577742+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:11.577901+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:12.578039+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:13.578210+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:14.578331+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:15.578472+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:16.578693+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:17.578846+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:18.578970+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:19.579110+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:20.579270+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:21.579416+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:22.579670+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:23.579786+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:24.579872+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:25.579995+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:26.580144+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:27.580320+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:28.580444+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:29.580570+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:30.580725+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:31.580879+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:32.581055+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:33.581221+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:34.581367+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:35.581519+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:36.581620+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:37.581731+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:38.581837+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:39.581992+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:40.582138+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:41.582245+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:42.582390+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:43.582507+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:44.582704+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:45.582830+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:46.582963+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:47.583085+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:48.583275+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:49.583406+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:50.583566+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:51.583671+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:52.583798+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:53.583919+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:54.584053+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:55.584166+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:56.584297+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:57.584425+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:58.584544+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:59.584681+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:00.585317+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:01.585511+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:02.585670+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:03.585810+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:04.585938+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:05.586070+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:06.586188+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:07.586307+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:08.586440+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:09.586566+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:10.586772+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:11.586911+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:12.587045+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:13.587217+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:14.587374+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:15.587495+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:16.587647+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:17.587763+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:18.587907+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:19.588036+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:20.588215+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:21.588355+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:22.588491+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:23.588620+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:24.588766+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:25.588906+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:26.589044+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:27.589173+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:28.589317+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:42 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:42 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:29.589458+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:30.589655+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:31.589761+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:42 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:32.590063+0000)
Dec 01 21:03:42 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:33.590269+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:34.590412+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:35.590534+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:36.590651+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:37.590796+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:38.590936+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:39.591062+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:40.591256+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:41.591494+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:42.591739+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:43.591913+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:44.592216+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:45.592411+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:46.592550+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:47.592698+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:48.592839+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:49.593017+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:50.593225+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:51.593362+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:52.593505+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:53.593639+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:54.593834+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:55.594036+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:56.594432+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:57.594812+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:58.595227+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:59.596011+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:00.596267+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:01.596435+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:02.596600+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:03.597421+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:04.597562+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:05.597687+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:06.597859+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:07.598017+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:08.598268+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:09.598459+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:10.598670+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:11.598874+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:12.599017+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:13.599141+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:14.599328+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:15.599473+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:16.599599+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:17.599719+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:18.599941+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:19.600140+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:20.600438+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:21.600587+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:22.600718+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:23.600929+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:24.601145+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:25.601441+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:26.601644+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:27.601834+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:28.602089+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:29.602270+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:30.602458+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:31.602958+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:32.603270+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:33.603428+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:34.603774+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:35.603994+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:36.604101+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:37.604320+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:38.604455+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:39.604642+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:40.604875+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:41.605003+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:42.605119+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:43.605248+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:44.605481+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:45.605684+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:46.605962+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:47.606102+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:48.606235+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:49.606414+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:50.606598+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:51.606711+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:52.606835+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:53.606977+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:54.607114+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:55.607250+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:56.607307+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:57.607444+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:58.607674+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:59.607823+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:00.607990+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:01.608145+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:02.608512+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: mgrc ms_handle_reset ms_handle_reset con 0x55b346270000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2943709997
Dec 01 21:03:43 compute-0 ceph-osd[88745]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2943709997,v1:192.168.122.100:6801/2943709997]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: get_auth_request con 0x55b346b03000 auth_method 0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: mgrc handle_mgr_configure stats_period=5
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:03.608654+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:04.608974+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:05.609118+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:06.609392+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:07.609709+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:08.609841+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:09.610058+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:10.610247+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:11.610393+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:12.610513+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:13.610680+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:14.610814+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:15.611129+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:16.611371+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:17.611700+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:18.612042+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:19.612265+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:20.612409+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:21.612615+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:22.612738+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:23.612901+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:24.803503+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:25.803817+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:26.803957+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:27.804174+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:28.804354+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:29.804477+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:30.804624+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:31.804932+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:32.805261+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:33.805520+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:34.805652+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:35.805985+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:36.806171+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:37.806331+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:38.806602+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 729088 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:39.806757+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:40.806926+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:41.807064+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:42.807238+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:43.807370+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:44.807571+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:45.807883+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:46.808013+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:47.808373+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:48.808616+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:49.808830+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:50.809056+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:51.809249+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:52.809364+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:53.809631+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:54.809844+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:55.809956+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:56.810064+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:57.810189+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:58.810290+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:59.810419+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:00.810609+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:01.810734+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:02.810881+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:03.811042+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:04.811307+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:05.811483+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:06.811708+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:07.811898+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:08.812076+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:09.812230+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:10.812391+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:11.812640+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:12.812804+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:13.813032+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:14.813207+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:15.813339+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:16.813476+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:17.813638+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:18.813795+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:19.813922+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:20.814058+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:21.814279+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:22.814423+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:23.814578+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:24.814717+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:25.814847+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:26.814973+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:27.815093+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:28.815220+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:29.815338+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:30.815469+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:31.815620+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:32.815749+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:33.815955+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:34.816096+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:35.816267+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:36.816387+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:37.816504+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:38.816723+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:39.816904+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:40.817079+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:41.817243+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:42.817407+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:43.817538+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:44.817681+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:45.817805+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:46.817993+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:47.818146+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:48.818335+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:49.818529+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:50.818770+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:51.818976+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:52.819155+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:53.819344+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:54.819569+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:55.819733+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:56.819914+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:57.820093+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:58.820273+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:59.820433+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:00.820635+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:01.820900+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:02.821053+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:03.821220+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:04.821356+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:05.821666+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:06.821925+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:07.822134+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:08.822269+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:09.823413+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:10.824282+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:11.825022+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:12.825574+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:13.825955+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:14.826222+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:15.826352+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:16.826477+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:17.827519+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:18.827720+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:19.827920+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:20.828136+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:21.828565+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:22.828700+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:23.829257+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:24.829450+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:25.830042+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:26.830504+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:27.830674+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:28.830843+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:29.831127+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:30.834362+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:31.834583+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:32.834786+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:33.835397+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:34.835902+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:35.836257+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:36.836586+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:37.836828+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:38.837267+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:39.837669+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:40.838033+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:41.838254+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:42.838396+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:43.838546+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:44.838738+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:45.838879+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:46.839055+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:47.839163+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:48.839392+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:49.839516+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:50.839664+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:51.839856+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:52.839969+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:53.840104+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:54.840265+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:55.840368+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:56.840521+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:57.840669+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:58.840824+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:59.840961+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:00.841155+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:01.841294+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:02.841450+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:03.841603+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:04.841691+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:05.841862+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:06.842002+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:07.842147+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:08.842307+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:09.842445+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:10.842605+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:11.842785+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:12.842951+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:13.843089+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:14.843258+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:15.843447+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:16.843638+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:17.843864+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:18.844051+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:19.844201+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:20.844512+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:21.844681+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:22.844823+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:23.845005+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:24.845122+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:25.845241+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:26.845403+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:27.845539+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:28.845716+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:29.845885+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:30.846078+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:31.846261+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:32.846422+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:33.846552+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:34.846742+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:35.846910+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:36.847100+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:37.847286+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:38.847470+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:39.847647+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:40.847838+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:41.847995+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:42.848255+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:43.848439+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:44.848582+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:45.848744+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:46.848830+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:47.848967+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:48.849153+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:49.849306+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:50.849499+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:51.849622+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:52.849730+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:53.849873+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:54.850041+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:55.850272+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:56.850462+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:57.850633+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:58.850817+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:59.850988+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:00.851153+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:01.851247+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:02.851387+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:03.851538+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:04.851669+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:05.851788+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:06.851900+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:07.852045+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:08.852234+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:09.852413+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:10.852611+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:11.852775+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:12.852932+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:13.853106+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:14.853252+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:15.853389+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:16.853530+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:17.853661+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:18.853881+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:19.854021+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:20.854175+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:21.854347+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:22.854510+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:23.854642+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread fragmentation_score=0.000128 took=0.000016s
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:24.854789+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:25.855017+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:26.855161+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:27.855369+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:28.855594+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:29.855851+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:30.856019+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:31.856162+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:32.856318+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:33.856474+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:34.856637+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:35.856774+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:36.856895+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:37.857051+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:38.857172+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:39.857470+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:40.857727+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:41.857881+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:42.858027+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:43.858230+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:44.858431+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:45.858604+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:46.858766+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:47.858902+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:48.859055+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:49.859241+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:50.859392+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:51.859522+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:52.859697+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:53.859946+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:54.860095+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:55.860227+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4156 writes, 19K keys, 4156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4156 writes, 370 syncs, 11.23 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b3445cf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:56.860359+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:57.860536+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:58.860704+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:59.860823+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:00.860985+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:01.861127+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:02.861249+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:03.861500+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:04.861708+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x48af5/0xaf000, compress 0x0/0x0/0x0, omap 0x6fed, meta 0x1a29013), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:05.861827+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490896 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1000.782714844s of 1000.789855957s, submitted: 4
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:06.861953+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbec00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 17154048 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:07.862149+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 17154048 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:08.862318+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 16277504 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:09.862510+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd916000/0x0/0x4ffc00000, data 0x84b706/0x8b6000, compress 0x0/0x0/0x0, omap 0x7804, meta 0x1a287fc), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 67 ms_handle_reset con 0x55b346dbec00 session 0x55b346db08c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 16244736 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:10.862699+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546336 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd911000/0x0/0x4ffc00000, data 0x84ccef/0x8b9000, compress 0x0/0x0/0x0, omap 0x7ba2, meta 0x1a2845e), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 16244736 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483eb800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:11.862830+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 16089088 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd911000/0x0/0x4ffc00000, data 0x84ccef/0x8b9000, compress 0x0/0x0/0x0, omap 0x7c58, meta 0x1a283a8), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:12.862979+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 ms_handle_reset con 0x55b3483eb800 session 0x55b347b12540
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:13.863258+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:14.863939+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:15.864920+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:16.865120+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:17.865255+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:18.865434+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:19.865603+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:20.865835+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:21.865990+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:22.866172+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:23.866405+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:24.866594+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:26.275821+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:27.276038+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:28.276256+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:29.276490+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:30.276699+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:31.276940+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:32.277132+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:33.277336+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:34.277486+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:35.277694+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:36.277834+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:37.277976+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:38.278094+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:39.278441+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:40.278925+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:41.279085+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:42.279245+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:43.279400+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:44.279537+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:45.279672+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 16154624 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd90e000/0x0/0x4ffc00000, data 0x84e2f4/0x8bc000, compress 0x0/0x0/0x0, omap 0x7f30, meta 0x1a280d0), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15050 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548948 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:46.279836+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483eb400
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.820827484s of 40.443630219s, submitted: 51
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 16023552 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 69 ms_handle_reset con 0x55b3483eb400 session 0x55b3481a01c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:47.280113+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 16023552 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:48.280332+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 16023552 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483ebc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:49.280523+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 24199168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:50.280664+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 24199168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 70 ms_handle_reset con 0x55b3483ebc00 session 0x55b345fbddc0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 70 ms_handle_reset con 0x55b3468da000 session 0x55b3481b8000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 604376 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fd104000/0x0/0x4ffc00000, data 0x1050f0a/0x10c6000, compress 0x0/0x0/0x0, omap 0x8976, meta 0x1a2768a), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:51.280911+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 24199168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 70 ms_handle_reset con 0x55b3468da000 session 0x55b345fbd880
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbec00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 70 ms_handle_reset con 0x55b346dbec00 session 0x55b346d6a1c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 71 ms_handle_reset con 0x55b3468da800 session 0x55b345f5ba40
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:52.281085+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dac00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 23912448 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 72 ms_handle_reset con 0x55b3468db000 session 0x55b3481cdc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 72 ms_handle_reset con 0x55b3468dac00 session 0x55b3445f4380
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:53.281275+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 72 ms_handle_reset con 0x55b3468da000 session 0x55b345fbc700
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 72 ms_handle_reset con 0x55b3468da800 session 0x55b345fbcfc0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 23969792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:54.281794+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 23969792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 73 ms_handle_reset con 0x55b3468db000 session 0x55b345fbc540
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:55.282124+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 23896064 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbec00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 572149 data_alloc: 218103808 data_used: 1251
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 74 ms_handle_reset con 0x55b346dbec00 session 0x55b347b128c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:56.282328+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db400
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.722087860s of 10.010193825s, submitted: 149
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 23535616 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 75 ms_handle_reset con 0x55b3468db400 session 0x55b3481e7a40
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fd8fb000/0x0/0x4ffc00000, data 0x856722/0x8ce000, compress 0x0/0x0/0x0, omap 0x99f7, meta 0x1a26609), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:57.282666+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 23568384 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:58.282931+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 23576576 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 75 ms_handle_reset con 0x55b3468da000 session 0x55b3481b8a80
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:59.283094+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 23543808 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 77 ms_handle_reset con 0x55b3468da800 session 0x55b346037c00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:00.283240+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 23732224 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 584510 data_alloc: 218103808 data_used: 5312
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fd8f1000/0x0/0x4ffc00000, data 0x85a925/0x8d7000, compress 0x0/0x0/0x0, omap 0xa69a, meta 0x1a25966), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:01.283567+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 23732224 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:02.283860+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 23691264 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:03.284045+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 23683072 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 78 ms_handle_reset con 0x55b3468db000 session 0x55b346da4540
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbec00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:04.284246+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 23683072 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fd8eb000/0x0/0x4ffc00000, data 0x85d3da/0x8dd000, compress 0x0/0x0/0x0, omap 0xabc3, meta 0x1a2543d), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 79 ms_handle_reset con 0x55b3468dd800 session 0x55b3481ccfc0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:05.284393+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 79 ms_handle_reset con 0x55b346dbec00 session 0x55b3481e61c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 80 ms_handle_reset con 0x55b3468da000 session 0x55b346cc7a40
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 22429696 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd400
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 597803 data_alloc: 218103808 data_used: 5897
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:06.284581+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 81 ms_handle_reset con 0x55b3468dd400 session 0x55b346db0700
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 81 heartbeat osd_stat(store_statfs(0x4fd8e1000/0x0/0x4ffc00000, data 0x86155c/0x8e7000, compress 0x0/0x0/0x0, omap 0xb3b6, meta 0x1a24c4a), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 22364160 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.186422348s of 10.314285278s, submitted: 71
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 81 heartbeat osd_stat(store_statfs(0x4fd8e1000/0x0/0x4ffc00000, data 0x86155c/0x8e7000, compress 0x0/0x0/0x0, omap 0xb3b6, meta 0x1a24c4a), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:07.284761+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 82 ms_handle_reset con 0x55b3468dd800 session 0x55b346cc6c40
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 22315008 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 82 ms_handle_reset con 0x55b3468db800 session 0x55b346cc6380
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:08.284964+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 22315008 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da400
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:09.285144+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 84 ms_handle_reset con 0x55b3468da400 session 0x55b3481a1a40
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 22102016 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:10.285291+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 22102016 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 620389 data_alloc: 218103808 data_used: 5897
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:11.285467+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 85 ms_handle_reset con 0x55b3468da000 session 0x55b347a8aa80
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 22102016 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:12.285615+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 22102016 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 85 heartbeat osd_stat(store_statfs(0x4fd8d1000/0x0/0x4ffc00000, data 0x868324/0x8f9000, compress 0x0/0x0/0x0, omap 0xc413, meta 0x1a23bed), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 85 handle_osd_map epochs [85,86], i have 86, src has [1,86]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:13.285782+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 21037056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:14.285992+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da400
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 21020672 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 86 heartbeat osd_stat(store_statfs(0x4fd8cc000/0x0/0x4ffc00000, data 0x8697d4/0x8fc000, compress 0x0/0x0/0x0, omap 0xc6bc, meta 0x1a23944), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:15.286124+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 87 ms_handle_reset con 0x55b3468da400 session 0x55b3481e6e00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 20963328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd400
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 628999 data_alloc: 218103808 data_used: 5897
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:16.286387+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 88 ms_handle_reset con 0x55b3468db800 session 0x55b3481cca80
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 88 ms_handle_reset con 0x55b3468dd400 session 0x55b346db1180
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 20701184 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dac00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.035199165s of 10.521741867s, submitted: 115
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:17.286554+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 89 ms_handle_reset con 0x55b3468dac00 session 0x55b3481cd880
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 20275200 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dac00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:18.286760+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 20193280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:19.286919+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 19988480 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 90 ms_handle_reset con 0x55b3468dac00 session 0x55b346d6a540
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da400
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:20.287169+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 19849216 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fd8a5000/0x0/0x4ffc00000, data 0x891dd1/0x927000, compress 0x0/0x0/0x0, omap 0xd692, meta 0x1a2296e), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 640844 data_alloc: 218103808 data_used: 16102
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 91 ms_handle_reset con 0x55b3468da400 session 0x55b345f5b500
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:21.287862+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 19816448 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 91 ms_handle_reset con 0x55b3468da000 session 0x55b3481fce00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dd400
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:22.288052+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 19726336 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 92 ms_handle_reset con 0x55b3468db800 session 0x55b346da4700
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 92 ms_handle_reset con 0x55b3468dd400 session 0x55b3482116c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 92 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:23.288266+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fc6f9000/0x0/0x4ffc00000, data 0x894d0a/0x92f000, compress 0x0/0x0/0x0, omap 0xe0c5, meta 0x2bc1f3b), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 19644416 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:24.288420+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 93 ms_handle_reset con 0x55b3468da000 session 0x55b346cc61c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19603456 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fc6f9000/0x0/0x4ffc00000, data 0x894d0a/0x92f000, compress 0x0/0x0/0x0, omap 0xe290, meta 0x2bc1d70), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da400
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 93 ms_handle_reset con 0x55b3468da400 session 0x55b3481e6c40
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dac00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:25.288567+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 19611648 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 94 ms_handle_reset con 0x55b3468dac00 session 0x55b346d6a540
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 650615 data_alloc: 218103808 data_used: 20193
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:26.288757+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19587072 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.253656387s of 10.031966209s, submitted: 178
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:27.288888+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 95 ms_handle_reset con 0x55b3468db800 session 0x55b346d6afc0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 19546112 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 96 ms_handle_reset con 0x55b3468da800 session 0x55b3460361c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:28.289011+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 18399232 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:29.289331+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 18399232 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fc6e8000/0x0/0x4ffc00000, data 0x89a4c9/0x93c000, compress 0x0/0x0/0x0, omap 0xee55, meta 0x2bc11ab), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:30.289443+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 18399232 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fc6e8000/0x0/0x4ffc00000, data 0x89a4c9/0x93c000, compress 0x0/0x0/0x0, omap 0xee55, meta 0x2bc11ab), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664249 data_alloc: 218103808 data_used: 20163
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:31.289587+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468da800 session 0x55b347eaa700
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468da000 session 0x55b3481a1340
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468db800 session 0x55b3481e7c00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 18391040 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346aa8800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b346aa8800 session 0x55b346da4e00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483ebc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dc800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468dc800 session 0x55b346da5500
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3483ebc00 session 0x55b3481cd500
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468da000 session 0x55b345fbdc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468da800 session 0x55b346d6a700
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468db800 session 0x55b346d6a1c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dc800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 ms_handle_reset con 0x55b3468dc800 session 0x55b346da4a80
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:32.289695+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 18391040 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 97 handle_osd_map epochs [97,98], i have 98, src has [1,98]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:33.289838+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 18333696 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 98 ms_handle_reset con 0x55b3468da000 session 0x55b3481cce00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:34.289971+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x89b9dd/0x940000, compress 0x0/0x0/0x0, omap 0xf1de, meta 0x2bc0e22), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 18309120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:35.290100+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 18309120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 667121 data_alloc: 218103808 data_used: 20217
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:36.290231+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 18309120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:37.290359+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 18309120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:38.290467+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483ebc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.124558449s of 11.240477562s, submitted: 51
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 99 ms_handle_reset con 0x55b3483ebc00 session 0x55b347b128c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346aa8800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 18292736 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 99 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x89b9dd/0x940000, compress 0x0/0x0/0x0, omap 0xf1de, meta 0x2bc0e22), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:39.290632+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 100 ms_handle_reset con 0x55b346aa8800 session 0x55b3481e76c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dcc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbfc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 100 ms_handle_reset con 0x55b346dbfc00 session 0x55b348210380
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 100 ms_handle_reset con 0x55b3468dcc00 session 0x55b346d6a8c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dcc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 17965056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 100 ms_handle_reset con 0x55b3468dcc00 session 0x55b347a8b340
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 100 heartbeat osd_stat(store_statfs(0x4fc6e2000/0x0/0x4ffc00000, data 0x89e45a/0x946000, compress 0x0/0x0/0x0, omap 0xf794, meta 0x2bc086c), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:40.290754+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 100 ms_handle_reset con 0x55b3468da000 session 0x55b3481a0a80
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346aa8800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 17965056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 678178 data_alloc: 218103808 data_used: 20233
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:41.290910+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 17874944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 102 ms_handle_reset con 0x55b346aa8800 session 0x55b3481cc1c0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:42.291048+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 17874944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b346dbfc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 102 ms_handle_reset con 0x55b346dbfc00 session 0x55b346db1c00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483ebc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 102 ms_handle_reset con 0x55b3483ebc00 session 0x55b346036380
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:43.291173+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3483ebc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 102 ms_handle_reset con 0x55b3483ebc00 session 0x55b346da4fc0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 17874944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:44.291357+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 102 ms_handle_reset con 0x55b3468da000 session 0x55b3481a0540
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 17874944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dcc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:45.291517+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 102 heartbeat osd_stat(store_statfs(0x4fc6df000/0x0/0x4ffc00000, data 0x8a1096/0x94c000, compress 0x0/0x0/0x0, omap 0xfd61, meta 0x2bc029f), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 17866752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 103 ms_handle_reset con 0x55b3468dcc00 session 0x55b348210e00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:46.291641+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 682643 data_alloc: 218103808 data_used: 20268
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 17842176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:47.291849+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 103 ms_handle_reset con 0x55b3468da800 session 0x55b347eaae00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 103 ms_handle_reset con 0x55b3468db800 session 0x55b3481fd340
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fc6da000/0x0/0x4ffc00000, data 0x8a2691/0x94e000, compress 0x0/0x0/0x0, omap 0x10093, meta 0x2bbff6d), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:48.292074+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 17932288 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fc6da000/0x0/0x4ffc00000, data 0x8a3b4d/0x950000, compress 0x0/0x0/0x0, omap 0x103c5, meta 0x2bbfc3b), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:49.292262+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.364825249s of 10.770331383s, submitted: 84
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 105 ms_handle_reset con 0x55b3468da000 session 0x55b3481cddc0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 17915904 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:50.292414+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 17915904 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:51.293011+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 687103 data_alloc: 218103808 data_used: 24212
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 17915904 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:52.293172+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 17915904 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fc6d6000/0x0/0x4ffc00000, data 0x8a514b/0x952000, compress 0x0/0x0/0x0, omap 0x1066f, meta 0x2bbf991), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:53.293354+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 17915904 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:54.293527+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 106 ms_handle_reset con 0x55b3468dd800 session 0x55b345f5bc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 106 ms_handle_reset con 0x55b3468db000 session 0x55b3481e6000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468db800
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 17907712 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 106 ms_handle_reset con 0x55b3468db800 session 0x55b346d6ac40
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:55.293688+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 17891328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468dcc00
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 106 ms_handle_reset con 0x55b3468dcc00 session 0x55b345efc540
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: handle_auth_request added challenge on 0x55b3468da000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:56.293832+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 686344 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 17989632 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 107 heartbeat osd_stat(store_statfs(0x4fc6fc000/0x0/0x4ffc00000, data 0x8825f4/0x930000, compress 0x0/0x0/0x0, omap 0x10a89, meta 0x2bbf577), peers [0,1] op hist [1])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 107 ms_handle_reset con 0x55b3468da000 session 0x55b347eab500
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:57.293975+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:58.294108+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:59.294255+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:00.294436+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:01.294613+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 692853 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:02.294849+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fc6f2000/0x0/0x4ffc00000, data 0x8850f1/0x936000, compress 0x0/0x0/0x0, omap 0x11068, meta 0x2bbef98), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:03.294986+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 108 handle_osd_map epochs [108,109], i have 109, src has [1,109]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.772667885s of 14.188973427s, submitted: 90
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 18014208 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:04.295123+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 18014208 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:05.295261+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 18014208 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:06.295405+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:07.295521+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:08.295691+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:09.295795+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:10.295894+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:11.296046+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:12.296212+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _renew_subs
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:13.296574+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:14.296696+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:15.296818+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:16.296926+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:17.297074+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:18.297267+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:19.297439+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:20.297621+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:21.297836+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:22.297979+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:23.298112+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:24.298289+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:25.298452+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:26.298600+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:27.298735+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:28.298868+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:29.299023+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:30.299166+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:31.299411+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:32.299668+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:33.299793+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:34.299980+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:35.300097+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:36.300252+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:37.300451+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:38.300636+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:39.300857+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:40.300998+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:41.301286+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:42.301521+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:43.301655+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:44.301802+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:45.301986+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:46.302140+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:47.302238+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:48.302541+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:49.302684+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:50.302897+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:51.303060+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:52.303260+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:53.303419+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:54.303610+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:55.303819+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:56.304081+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:57.304246+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:58.304418+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:59.304612+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:00.304815+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:01.305081+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:02.305260+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:03.305442+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:04.305632+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:05.305835+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:06.306074+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:07.306257+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:08.306445+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:09.306623+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:10.306897+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:11.307261+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:12.307479+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:13.307632+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:14.307796+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:15.307959+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:16.308134+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:17.308226+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:18.309541+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:19.309699+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 17981440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:20.309867+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 17932288 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:21.310067+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'config diff' '{prefix=config diff}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'config show' '{prefix=config show}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'counter dump' '{prefix=counter dump}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 17760256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'counter schema' '{prefix=counter schema}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:22.310252+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 17563648 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:23.310444+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 17498112 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:24.310595+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 17498112 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:25.310801+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'log dump' '{prefix=log dump}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 17498112 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:26.311019+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'perf dump' '{prefix=perf dump}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'perf schema' '{prefix=perf schema}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:27.311239+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:28.311414+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:29.311592+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:30.311775+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:31.311950+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:32.312110+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:33.312260+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:34.312409+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1065: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:35.312586+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:36.312751+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:37.312971+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:38.313135+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:39.313251+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:40.313401+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:41.313594+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:42.313733+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:43.313914+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:44.314076+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:45.314253+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:46.314396+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:47.314519+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:48.314648+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:49.314888+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:50.315042+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:51.315199+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:52.315344+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:53.315520+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:54.315699+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:55.315884+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:56.316070+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:57.316229+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:58.316353+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:59.316619+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:00.316852+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:01.317077+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:02.317211+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:03.317389+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:04.317559+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 17121280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:05.317749+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:06.317900+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:07.318074+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:08.318266+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:09.318444+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:10.318648+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:11.318836+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:12.319030+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:13.319237+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:14.319421+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:15.319620+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:16.319806+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:17.320039+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:18.320222+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:19.320441+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:20.320597+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:21.320796+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:22.321030+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:23.321218+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:24.321711+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:25.322112+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:26.322325+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:27.322664+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:28.322946+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:29.323167+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:30.323375+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:31.323608+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:32.323844+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:33.324043+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:34.324259+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:35.324414+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:36.324651+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:37.324837+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:38.325005+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:39.325222+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:40.325401+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:41.325636+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:42.325772+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:43.325998+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:44.326156+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:45.326334+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:46.326738+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:47.326923+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:48.327085+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:49.327344+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:50.327535+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:51.327763+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:52.327940+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:53.328167+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:54.328414+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 17113088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:55.328619+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:56.328803+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:57.329026+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:58.329327+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:59.329500+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:00.329683+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:01.329834+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:02.329962+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:03.330118+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:04.330298+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:05.330490+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:06.330651+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 17104896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:07.330838+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:08.331060+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:09.331252+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:10.331514+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:11.331727+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:12.331855+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:13.332016+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:14.332277+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:15.332482+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:16.332688+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:17.332861+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:18.333087+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:19.333288+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:20.333551+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:21.333805+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:22.334016+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:23.334220+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:24.334411+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:25.334646+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:26.334871+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:27.335091+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:28.335275+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:29.335470+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:30.336235+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:31.336378+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:32.336581+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:33.336768+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:34.336966+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:35.337289+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:36.337549+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:37.337657+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:38.337811+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:39.338004+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:40.338298+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:41.338496+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:42.338668+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:43.338904+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:44.339090+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:45.339229+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 17096704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:46.339398+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:47.339586+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:48.339766+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:49.339940+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:50.340086+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:51.340266+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:52.340439+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:53.340588+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:54.340780+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:55.340942+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:56.341145+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:57.341407+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:58.341606+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:59.341784+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:00.342034+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 17088512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:01.342420+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:02.342639+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:03.342864+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:04.343022+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:05.343226+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:06.343329+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:07.343548+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:08.343679+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:09.343819+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:10.343940+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:11.344119+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:12.344298+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:13.344446+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:14.344590+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:15.344737+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:16.344901+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:17.345557+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:18.346245+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:19.347720+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:20.349018+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:21.350162+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:22.350590+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:23.350841+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:24.351020+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:25.351616+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:26.352126+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:27.352727+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:28.352923+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:29.353151+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:30.353470+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:31.353708+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:32.353897+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:33.354276+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:34.354459+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:35.354717+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:36.354866+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:37.355062+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:38.355286+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:39.355451+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:40.355645+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:41.355898+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:42.356271+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:43.356456+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:44.356623+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 17080320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:45.356776+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:46.356971+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:47.357260+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:48.357440+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:49.357658+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:50.357903+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:51.358150+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:52.358397+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:53.358575+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:54.358723+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:55.358965+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:56.359115+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:57.359269+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:58.359440+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:59.359645+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:00.359833+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 17072128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:01.360058+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:02.360286+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:03.360548+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:04.360704+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:05.360923+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:06.361094+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:07.361271+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:08.361515+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:09.361735+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:10.361880+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:11.362031+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:12.362203+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:13.362374+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:14.362641+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:15.362862+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:16.363025+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:17.363309+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:18.363568+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:19.363831+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:20.364017+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:21.364285+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:22.364549+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:23.364862+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:24.365073+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:25.365264+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:26.365509+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:27.365725+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:28.365908+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:29.366272+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:30.366549+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:31.366885+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:32.367113+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:33.367306+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:34.367549+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:35.367820+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:36.368050+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:37.368285+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:38.368528+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:39.368791+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:40.369038+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:41.369297+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:42.369567+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:43.369862+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:44.370123+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:45.370345+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:46.370523+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:47.370685+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:48.370916+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:49.371099+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:50.371263+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:51.371481+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:52.371629+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:53.371818+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:54.371989+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:55.372305+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:56.372582+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:57.372773+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:58.372905+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:59.373119+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:00.373318+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:01.373570+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:02.373790+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:03.373998+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:04.374251+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:05.374476+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:06.374684+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:07.374940+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:08.375115+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:09.375353+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:10.375637+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:11.375888+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:12.376039+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:13.376197+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:14.376429+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:15.376569+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:16.376715+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:17.376876+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:18.377027+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:19.377164+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:20.377291+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:21.377485+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:22.377656+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:23.377822+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:24.377979+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:25.378243+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:26.378457+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 17063936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:27.378617+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:28.378780+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:29.378917+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:30.379094+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:31.379348+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:32.379517+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:33.379706+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:34.379859+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:35.380082+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:36.380272+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:37.380427+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:38.380611+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:39.380783+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:40.380960+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:41.381143+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:42.381359+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:43.381751+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:44.381925+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 17055744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:45.382073+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:46.382248+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:47.382412+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:48.382550+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:49.382893+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:50.383092+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:51.383458+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:52.383707+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:53.383886+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:54.384151+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:55.384371+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:56.384590+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:57.384840+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:58.385022+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:59.385301+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:00.385568+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:01.385976+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:02.386136+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:03.386319+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:04.386545+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 17047552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:05.386730+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:06.387077+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:07.387226+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:08.387390+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:09.387566+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:10.387850+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:11.388075+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:12.388263+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:13.388410+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:14.388567+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:15.388745+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:16.388941+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:17.389115+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:18.389257+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:19.389406+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:20.389538+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:21.389869+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:22.390058+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:23.390248+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 17039360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:24.390525+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:25.391350+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:26.391546+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:27.391728+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:28.391891+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:29.392049+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:30.392284+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:31.392510+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:32.392647+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:33.392849+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:34.393062+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:35.393265+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:36.393457+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:37.393642+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:38.393790+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:39.393924+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 17031168 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:40.394116+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:41.394372+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:42.394608+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:43.394826+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:44.395039+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:45.395246+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:46.395405+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:47.395610+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:48.395836+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:49.396101+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:50.396376+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:51.396645+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:52.396814+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:53.396999+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:54.397177+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:55.397383+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5583 writes, 23K keys, 5583 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5583 writes, 995 syncs, 5.61 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1427 writes, 3716 keys, 1427 commit groups, 1.0 writes per commit group, ingest: 2.24 MB, 0.00 MB/s
                                           Interval WAL: 1427 writes, 625 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:56.397572+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 17022976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:57.397757+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 17006592 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:58.397927+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 17006592 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:59.398113+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 17006592 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:00.398287+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 17006592 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:01.398488+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 17006592 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:02.398611+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 17006592 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: mgrc ms_handle_reset ms_handle_reset con 0x55b346b03000
Dec 01 21:03:43 compute-0 ceph-osd[88745]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2943709997
Dec 01 21:03:43 compute-0 ceph-osd[88745]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2943709997,v1:192.168.122.100:6801/2943709997]
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: get_auth_request con 0x55b3468dc800 auth_method 0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: mgrc handle_mgr_configure stats_period=5
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:03.398949+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:04.399117+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:05.399276+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:06.399385+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:07.399587+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:08.399738+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:09.399918+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:10.400048+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:11.400225+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:12.400386+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:13.400540+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:14.400687+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:15.400969+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:16.401140+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:17.401249+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:18.401382+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:19.401542+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:20.401720+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:21.401905+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:22.402073+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:23.402285+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:24.402453+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:25.402610+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:26.402782+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:27.402933+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:28.403068+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:29.403281+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:30.403425+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:31.403583+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:32.403789+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:33.403953+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:34.404064+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:35.404226+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:36.404394+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:37.404540+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:38.404711+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:39.404869+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:40.405056+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:41.405307+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:42.405491+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:43.405681+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:44.405876+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:45.406051+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:46.406317+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:47.406717+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:48.406930+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:49.407081+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:50.407242+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:51.407456+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:52.407613+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:53.407812+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:54.407969+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:55.408176+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:56.408441+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:57.408635+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:58.408821+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:59.408974+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:00.409145+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:01.409432+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:02.409615+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:03.409797+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:04.409946+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:05.410065+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:06.410206+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:07.410325+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:08.410451+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:43 compute-0 ceph-osd[88745]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:43 compute-0 ceph-osd[88745]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695163 data_alloc: 218103808 data_used: 22129
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 16801792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:09.410616+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 17154048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'config diff' '{prefix=config diff}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'config show' '{prefix=config show}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:10.410744+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'counter dump' '{prefix=counter dump}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'counter schema' '{prefix=counter schema}'
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 16793600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x8865a1/0x939000, compress 0x0/0x0/0x0, omap 0x113ad, meta 0x2bbec53), peers [0,1] op hist [])
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:11.410892+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 16793600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: tick
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_tickets
Dec 01 21:03:43 compute-0 ceph-osd[88745]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:12.411055+0000)
Dec 01 21:03:43 compute-0 ceph-osd[88745]: do_command 'log dump' '{prefix=log dump}'
Dec 01 21:03:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 01 21:03:43 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175554840' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 01 21:03:43 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15054 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:43 compute-0 ceph-mon[75880]: from='client.15046 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:43 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/735797386' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 01 21:03:43 compute-0 ceph-mon[75880]: from='client.15050 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:43 compute-0 ceph-mon[75880]: pgmap v1065: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:43 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1175554840' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 01 21:03:43 compute-0 ceph-mon[75880]: from='client.15054 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:43 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15058 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:43 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 01 21:03:43 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822711834' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 01 21:03:44 compute-0 podman[265394]: 2025-12-01 21:03:44.103970382 +0000 UTC m=+0.060333021 container health_status 1f841d4d1a04af964c84b20123789b7ebf1609dd2296a671b67c4a352399ab66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 01 21:03:44 compute-0 podman[265395]: 2025-12-01 21:03:44.147602349 +0000 UTC m=+0.095890065 container health_status b580d2c4ad0224944e4571c4d3b0a7ffaef83eb4156024ea58101572076971b4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 01 21:03:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:03:44.368 155855 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 01 21:03:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:03:44.368 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 01 21:03:44 compute-0 ovn_metadata_agent[155839]: 2025-12-01 21:03:44.368 155855 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 01 21:03:44 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15060 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:44 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 01 21:03:44 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3414691837' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 01 21:03:44 compute-0 ceph-mon[75880]: from='client.15058 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:44 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/822711834' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 01 21:03:44 compute-0 ceph-mon[75880]: from='client.15060 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:44 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3414691837' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 01 21:03:44 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15064 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:45 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1066: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 01 21:03:45 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62311913' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 01 21:03:45 compute-0 crontab[265616]: (root) LIST (root)
Dec 01 21:03:45 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15068 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:45 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 01 21:03:45 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259491495' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 01 21:03:45 compute-0 ceph-mon[75880]: from='client.15064 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:45 compute-0 ceph-mon[75880]: pgmap v1066: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:45 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/62311913' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 01 21:03:45 compute-0 ceph-mon[75880]: from='client.15068 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:45 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/259491495' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 01 21:03:45 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15072 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 01 21:03:46 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433981880' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 01 21:03:46 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15076 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:46 compute-0 ceph-mon[75880]: from='client.15072 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:46 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1433981880' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 01 21:03:46 compute-0 ceph-mon[75880]: from='client.15076 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:46 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 01 21:03:46 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842712939' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 01 21:03:46 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15080 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:47 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1067: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:47 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15082 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:47 compute-0 ceph-mgr[76174]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 01 21:03:47 compute-0 ceph-dcf60a89-bba0-58b0-a1bf-d4bde723201b-mgr-compute-0-xhvuzu[76170]: 2025-12-01T21:03:47.252+0000 7f311224f640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=0 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000023
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000115 1 0.000048
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 21:03:47 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001362 2 0.000033
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:47 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetLog 0.000863 2 0.000043
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.012067 2 0.000137
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.012114 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000071 1 0.000099
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.123812 2 0.000198
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.123936 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.149454 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.219730 2 0.000046
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.219765 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000065 1 0.000089
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.330943 2 0.000106
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.331002 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000103 1 0.000127
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.123331 2 0.000170
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.123448 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.357608 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.413463 2 0.000058
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.413562 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000104 1 0.000168
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.093590 2 0.000291
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.093784 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.439382 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.026097 2 0.000291
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.026488 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=-1 lpr=45 pi=[42,45)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.454826 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 1859584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:25.321496+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 46 handle_osd_map epochs [46,47], i have 46, src has [1,47]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 46 handle_osd_map epochs [46,47], i have 47, src has [1,47]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997717 2 0.000072
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering 0.998754 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown m=4 mbc={}] enter Started/Primary/Active
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999170 2 0.000063
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.000718 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.001775 3 0.000189
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000129 1 0.000067
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 lc 33'11 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002413 3 0.000331
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.185998 3 0.000090
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.184259 3 0.000043
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000008 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 lc 33'17 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000147 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=46/47 n=2 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.058968 1 0.000110
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=46/47 n=1 ec=39/23 lis/c=46/39 les/c/f=47/41/0 sis=46) [1] r=0 lpr=46 pi=[39,46)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 47 heartbeat osd_stat(store_statfs(0x4fe0db000/0x0/0x4ffc00000, data 0xa1fef/0xef000, compress 0x0/0x0/0x0, omap 0x8155, meta 0x1a27eab), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:26.321650+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 368387 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 47 handle_osd_map epochs [47,48], i have 47, src has [1,48]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 5.358413 13 0.000127
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 6.071763 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 7.082663 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 7.082702 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934832573s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 active pruub 105.293449402s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] exit Reset 0.000165 1 0.000248
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 5.648972 13 0.000123
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] enter Started
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 6.073181 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 7.084147 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] enter Start
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] exit Start 0.000015 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.934741974s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293449402s@ mbc={}] enter Started/Stray
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 7.084759 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933793068s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 active pruub 105.293289185s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] exit Reset 0.000254 1 0.000987
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] enter Started
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] enter Start
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] exit Start 0.000054 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 48 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48 pruub=9.933597565s) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY pruub 105.293289185s@ mbc={}] enter Started/Stray
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 48 handle_osd_map epochs [48,48], i have 48, src has [1,48]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 48 heartbeat osd_stat(store_statfs(0x4fe0d5000/0x0/0x4ffc00000, data 0xa39b1/0xf3000, compress 0x0/0x0/0x0, omap 0x8556, meta 0x1a27aaa), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:27.321863+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:33:56.761858+0000 osd.1 (osd.1) 14 : cluster [DBG] 7.1d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:33:56.772403+0000 osd.1 (osd.1) 15 : cluster [DBG] 7.1d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 15)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:33:56.761858+0000 osd.1 (osd.1) 14 : cluster [DBG] 7.1d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:33:56.772403+0000 osd.1 (osd.1) 15 : cluster [DBG] 7.1d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.025774 7 0.000163
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 49 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 1.027151 7 0.000149
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.070994 2 0.000053
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.071043 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000136 1 0.000103
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.198140 2 0.000028
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.198184 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000150 1 0.000089
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.130997 2 0.000247
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.131232 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.228187 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.025814 2 0.000125
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.026039 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=-1 lpr=48 pi=[42,48)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 1.251469 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:28.322140+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:29.322287+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.882609367s of 10.177284241s, submitted: 470
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:30.322427+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 50 heartbeat osd_stat(store_statfs(0x4fe0cd000/0x0/0x4ffc00000, data 0xa63a7/0xf7000, compress 0x0/0x0/0x0, omap 0x8af5, meta 0x1a2750b), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:31.322573+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 372826 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:32.322793+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:01.830326+0000 osd.1 (osd.1) 16 : cluster [DBG] 3.1a scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:01.840795+0000 osd.1 (osd.1) 17 : cluster [DBG] 3.1a scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 17)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:01.830326+0000 osd.1 (osd.1) 16 : cluster [DBG] 3.1a scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:01.840795+0000 osd.1 (osd.1) 17 : cluster [DBG] 3.1a scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:33.323045+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:34.323287+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:03.782069+0000 osd.1 (osd.1) 18 : cluster [DBG] 3.19 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:03.792520+0000 osd.1 (osd.1) 19 : cluster [DBG] 3.19 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 53 heartbeat osd_stat(store_statfs(0x4fe0cd000/0x0/0x4ffc00000, data 0xa8fd3/0xfd000, compress 0x0/0x0/0x0, omap 0x8fed, meta 0x1a27013), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 19)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:03.782069+0000 osd.1 (osd.1) 18 : cluster [DBG] 3.19 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:03.792520+0000 osd.1 (osd.1) 19 : cluster [DBG] 3.19 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:35.323523+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65396736 unmapped: 655360 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:36.323637+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:05.789505+0000 osd.1 (osd.1) 20 : cluster [DBG] 7.12 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:05.800088+0000 osd.1 (osd.1) 21 : cluster [DBG] 7.12 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 384425 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 21)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:05.789505+0000 osd.1 (osd.1) 20 : cluster [DBG] 7.12 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:05.800088+0000 osd.1 (osd.1) 21 : cluster [DBG] 7.12 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 638976 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:37.323891+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:06.836789+0000 osd.1 (osd.1) 22 : cluster [DBG] 3.14 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:06.847390+0000 osd.1 (osd.1) 23 : cluster [DBG] 3.14 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 23)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:06.836789+0000 osd.1 (osd.1) 22 : cluster [DBG] 3.14 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:06.847390+0000 osd.1 (osd.1) 23 : cluster [DBG] 3.14 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:38.324131+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 630784 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 53 heartbeat osd_stat(store_statfs(0x4fe0cc000/0x0/0x4ffc00000, data 0xaa453/0x100000, compress 0x0/0x0/0x0, omap 0x9295, meta 0x1a26d6b), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:39.324299+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:08.770083+0000 osd.1 (osd.1) 24 : cluster [DBG] 3.13 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:08.780668+0000 osd.1 (osd.1) 25 : cluster [DBG] 3.13 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 18.737783 33 0.000147
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 18.744374 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 19.755618 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 19.755692 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262128830s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 active pruub 121.294456482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] exit Reset 0.000148 1 0.000239
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] enter Started
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] enter Start
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] exit Start 0.000014 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 54 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=13.262031555s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 121.294456482s@ mbc={}] enter Started/Stray
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016062737s of 10.056778908s, submitted: 17
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 25)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:08.770083+0000 osd.1 (osd.1) 24 : cluster [DBG] 3.13 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:08.780668+0000 osd.1 (osd.1) 25 : cluster [DBG] 3.13 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.490505 7 0.000176
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000109 1 0.000048
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002053 1 0.000073
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002232 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] lb MIN local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.492809 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 565248 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:40.324512+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:09.803702+0000 osd.1 (osd.1) 26 : cluster [DBG] 7.10 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:09.814261+0000 osd.1 (osd.1) 27 : cluster [DBG] 7.10 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 27)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:09.803702+0000 osd.1 (osd.1) 26 : cluster [DBG] 7.10 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:09.814261+0000 osd.1 (osd.1) 27 : cluster [DBG] 7.10 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 19.321074 37 0.000157
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 19.325632 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 20.334118 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 20.334164 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678371429s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 active pruub 122.300827026s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] exit Reset 0.000198 1 0.000270
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] enter Started
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] enter Start
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] exit Start 0.000014 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 56 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=12.678308487s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 122.300827026s@ mbc={}] enter Started/Stray
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 56 handle_osd_map epochs [56,56], i have 56, src has [1,56]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:41.324773+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:10.775481+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.17 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:10.786021+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.17 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 401703 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 29)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:10.775481+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.17 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:10.786021+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.17 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:42.325106+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.862351 6 0.000122
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001592 2 0.000153
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 DELETING pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.001996 1 0.000059
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.003655 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] lb MIN local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=-1 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.866133 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:43.325272+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 57 heartbeat osd_stat(store_statfs(0x4fe0bc000/0x0/0x4ffc00000, data 0xaf97f/0x10c000, compress 0x0/0x0/0x0, omap 0x9c8a, meta 0x1a26376), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 475136 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:44.325469+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:13.732814+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.16 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:13.743384+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.16 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b(unlocked)] enter Initial
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=0 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000145 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=0 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000059
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000023 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000725 1 0.000112
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000626 2 0.000147
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 58 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 31)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:13.732814+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.16 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:13.743384+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.16 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 58 handle_osd_map epochs [58,59], i have 59, src has [1,59]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.528370 2 0.000125
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.529873 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002653 4 0.000158
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000161 1 0.000075
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007832 2 0.000083
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=58/59 n=1 ec=39/23 lis/c=58/45 les/c/f=59/46/0 sis=58) [1] r=0 lpr=58 pi=[45,58)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 458752 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:45.325737+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 450560 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:46.325986+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 414414 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 450560 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:47.326166+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 59 heartbeat osd_stat(store_statfs(0x4fe0b6000/0x0/0x4ffc00000, data 0xb25b3/0x112000, compress 0x0/0x0/0x0, omap 0xa208, meta 0x1a25df8), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d(unlocked)] enter Initial
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=0 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000119 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=0 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000032 1 0.000055
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000152 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000206 1 0.000378
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000653 2 0.000089
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:47 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000028 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 60 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 434176 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:48.326360+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 60 handle_osd_map epochs [60,61], i have 61, src has [1,61]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.634047 2 0.000290
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.635071 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.003035 3 0.000182
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000099 1 0.000059
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000007 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 61 handle_osd_map epochs [61,61], i have 61, src has [1,61]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.117056 3 0.000063
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=60/61 n=1 ec=39/23 lis/c=60/48 les/c/f=61/49/0 sis=60) [1] r=0 lpr=60 pi=[48,60)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 1482752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:49.326532+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.887034416s of 10.012774467s, submitted: 34
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 1466368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:50.327370+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:19.816514+0000 osd.1 (osd.1) 32 : cluster [DBG] 3.10 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:19.827233+0000 osd.1 (osd.1) 33 : cluster [DBG] 3.10 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 33)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:19.816514+0000 osd.1 (osd.1) 32 : cluster [DBG] 3.10 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:19.827233+0000 osd.1 (osd.1) 33 : cluster [DBG] 3.10 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1449984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:51.328152+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:20.812529+0000 osd.1 (osd.1) 34 : cluster [DBG] 7.14 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:20.823358+0000 osd.1 (osd.1) 35 : cluster [DBG] 7.14 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ae000/0x0/0x4ffc00000, data 0xb66a5/0x11c000, compress 0x0/0x0/0x0, omap 0xab14, meta 0x1a254ec), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 434876 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 35)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:20.812529+0000 osd.1 (osd.1) 34 : cluster [DBG] 7.14 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:20.823358+0000 osd.1 (osd.1) 35 : cluster [DBG] 7.14 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 1433600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:52.328364+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 1409024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:53.328562+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:22.783748+0000 osd.1 (osd.1) 36 : cluster [DBG] 7.b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:22.794243+0000 osd.1 (osd.1) 37 : cluster [DBG] 7.b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 37)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:22.783748+0000 osd.1 (osd.1) 36 : cluster [DBG] 7.b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:22.794243+0000 osd.1 (osd.1) 37 : cluster [DBG] 7.b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 1392640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:54.329392+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0a8000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 1392640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:55.329642+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1384448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:56.329787+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 439595 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1384448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:57.329959+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0a8000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0a8000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1351680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:58.330081+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:27.839650+0000 osd.1 (osd.1) 38 : cluster [DBG] 3.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:27.850211+0000 osd.1 (osd.1) 39 : cluster [DBG] 3.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 39)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:27.839650+0000 osd.1 (osd.1) 38 : cluster [DBG] 3.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:27.850211+0000 osd.1 (osd.1) 39 : cluster [DBG] 3.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 1343488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:59.330371+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.989737511s of 10.015211105s, submitted: 10
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:00.330527+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:29.831633+0000 osd.1 (osd.1) 40 : cluster [DBG] 3.b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:29.842227+0000 osd.1 (osd.1) 41 : cluster [DBG] 3.b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 41)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:29.831633+0000 osd.1 (osd.1) 40 : cluster [DBG] 3.b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:29.842227+0000 osd.1 (osd.1) 41 : cluster [DBG] 3.b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:01.330787+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:30.791870+0000 osd.1 (osd.1) 42 : cluster [DBG] 3.2 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:30.802400+0000 osd.1 (osd.1) 43 : cluster [DBG] 3.2 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1327104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446108 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 43)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:30.791870+0000 osd.1 (osd.1) 42 : cluster [DBG] 3.2 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:30.802400+0000 osd.1 (osd.1) 43 : cluster [DBG] 3.2 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:02.330969+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1327104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:03.331146+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1327104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:04.331408+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:33.777787+0000 osd.1 (osd.1) 44 : cluster [DBG] 3.0 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:33.788263+0000 osd.1 (osd.1) 45 : cluster [DBG] 3.0 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 45)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:33.777787+0000 osd.1 (osd.1) 44 : cluster [DBG] 3.0 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:33.788263+0000 osd.1 (osd.1) 45 : cluster [DBG] 3.0 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:05.331876+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 1310720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:06.332118+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:35.781237+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.0 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:35.791800+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.0 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 1302528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 450930 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 47)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:35.781237+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.0 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:35.791800+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.0 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:07.332487+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 1302528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:08.332624+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:09.332861+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:10.333081+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:11.333279+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 450930 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:12.333471+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.065758705s of 13.081768036s, submitted: 8
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:13.333658+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:42.913497+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:42.924056+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 49)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:42.913497+0000 osd.1 (osd.1) 48 : cluster [DBG] 3.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:42.924056+0000 osd.1 (osd.1) 49 : cluster [DBG] 3.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:14.333907+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:15.334148+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:44.980888+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.7 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:44.991501+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.7 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 51)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:44.980888+0000 osd.1 (osd.1) 50 : cluster [DBG] 7.7 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:44.991501+0000 osd.1 (osd.1) 51 : cluster [DBG] 7.7 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:16.334409+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:45.980643+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:45.991433+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458163 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 53)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:45.980643+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:45.991433+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:17.334672+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:18.334845+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1253376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:19.334985+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:49.063615+0000 osd.1 (osd.1) 54 : cluster [DBG] 3.1c scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:49.074219+0000 osd.1 (osd.1) 55 : cluster [DBG] 3.1c scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 1236992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 55)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:49.063615+0000 osd.1 (osd.1) 54 : cluster [DBG] 3.1c scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:49.074219+0000 osd.1 (osd.1) 55 : cluster [DBG] 3.1c scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:20.335141+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:50.044952+0000 osd.1 (osd.1) 56 : cluster [DBG] 7.19 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:50.055487+0000 osd.1 (osd.1) 57 : cluster [DBG] 7.19 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 57)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:50.044952+0000 osd.1 (osd.1) 56 : cluster [DBG] 7.19 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:50.055487+0000 osd.1 (osd.1) 57 : cluster [DBG] 7.19 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:21.335358+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1220608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462989 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:22.335538+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.114359856s of 10.133749962s, submitted: 10
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:23.335659+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:53.047247+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.2 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:53.057785+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.2 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 59)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:53.047247+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.2 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:53.057785+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.2 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:24.335819+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1204224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:25.336005+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:55.117398+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:55.127981+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1196032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 61)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:55.117398+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:55.127981+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:26.336216+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:56.094484+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.7 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:56.105071+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.7 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1196032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 63)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:56.094484+0000 osd.1 (osd.1) 62 : cluster [DBG] 4.7 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:56.105071+0000 osd.1 (osd.1) 63 : cluster [DBG] 4.7 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470222 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:27.336432+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1187840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:28.336662+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:58.111166+0000 osd.1 (osd.1) 64 : cluster [DBG] 4.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:58.121771+0000 osd.1 (osd.1) 65 : cluster [DBG] 4.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1171456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 65)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:58.111166+0000 osd.1 (osd.1) 64 : cluster [DBG] 4.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:58.121771+0000 osd.1 (osd.1) 65 : cluster [DBG] 4.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:29.336942+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:59.133096+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.5 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:34:59.143639+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.5 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 67)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:59.133096+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.5 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:34:59.143639+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.5 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:30.337196+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:31.337413+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:01.160267+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.f scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:01.170783+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.f scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1138688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 69)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:01.160267+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.f scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:01.170783+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.f scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477455 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:32.337650+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:02.155401+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.14 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:02.165952+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.14 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1138688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 71)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:02.155401+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.14 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:02.165952+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.14 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:33.337848+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1138688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.985276222s of 11.152510643s, submitted: 14
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:34.338028+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:04.199781+0000 osd.1 (osd.1) 72 : cluster [DBG] 2.1b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:04.210375+0000 osd.1 (osd.1) 73 : cluster [DBG] 2.1b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 73)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:04.199781+0000 osd.1 (osd.1) 72 : cluster [DBG] 2.1b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:04.210375+0000 osd.1 (osd.1) 73 : cluster [DBG] 2.1b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:35.338297+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:36.338491+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:06.196954+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.9 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:06.207483+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.9 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 75)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:06.196954+0000 osd.1 (osd.1) 74 : cluster [DBG] 4.9 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:06.207483+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.9 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 484692 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:37.338762+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1081344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:38.338910+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1081344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:39.339099+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1073152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:40.339255+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1073152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:41.339386+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 484692 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:42.339500+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:12.242382+0000 osd.1 (osd.1) 76 : cluster [DBG] 5.11 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:12.253008+0000 osd.1 (osd.1) 77 : cluster [DBG] 5.11 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 77)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:12.242382+0000 osd.1 (osd.1) 76 : cluster [DBG] 5.11 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:12.253008+0000 osd.1 (osd.1) 77 : cluster [DBG] 5.11 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:43.339698+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:44.339826+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:45.340008+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.024221420s of 12.033978462s, submitted: 6
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:46.340137+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:16.233800+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.10 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:16.244349+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.10 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 79)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:16.233800+0000 osd.1 (osd.1) 78 : cluster [DBG] 4.10 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:16.244349+0000 osd.1 (osd.1) 79 : cluster [DBG] 4.10 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 489518 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:47.340331+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:48.340489+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:18.234010+0000 osd.1 (osd.1) 80 : cluster [DBG] 5.13 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:18.244640+0000 osd.1 (osd.1) 81 : cluster [DBG] 5.13 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 81)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:18.234010+0000 osd.1 (osd.1) 80 : cluster [DBG] 5.13 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:18.244640+0000 osd.1 (osd.1) 81 : cluster [DBG] 5.13 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:49.340690+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:50.341062+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:20.199085+0000 osd.1 (osd.1) 82 : cluster [DBG] 2.17 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:20.209558+0000 osd.1 (osd.1) 83 : cluster [DBG] 2.17 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 958464 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 83)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:20.199085+0000 osd.1 (osd.1) 82 : cluster [DBG] 2.17 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:20.209558+0000 osd.1 (osd.1) 83 : cluster [DBG] 2.17 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:51.341394+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 494344 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:52.341556+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:53.341721+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 950272 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:54.341871+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:55.342034+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:56.342206+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:26.025480+0000 osd.1 (osd.1) 84 : cluster [DBG] 4.12 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:26.036079+0000 osd.1 (osd.1) 85 : cluster [DBG] 4.12 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 85)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:26.025480+0000 osd.1 (osd.1) 84 : cluster [DBG] 4.12 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:26.036079+0000 osd.1 (osd.1) 85 : cluster [DBG] 4.12 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 496757 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:57.342361+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:58.342522+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:59.342684+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:00.342807+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:01.342956+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.750560760s of 15.766418457s, submitted: 8
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 499170 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:02.343087+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:32.000090+0000 osd.1 (osd.1) 86 : cluster [DBG] 2.15 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:32.010669+0000 osd.1 (osd.1) 87 : cluster [DBG] 2.15 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 87)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:32.000090+0000 osd.1 (osd.1) 86 : cluster [DBG] 2.15 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:32.010669+0000 osd.1 (osd.1) 87 : cluster [DBG] 2.15 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:03.343246+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:04.343372+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:33.993219+0000 osd.1 (osd.1) 88 : cluster [DBG] 5.12 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:34.003815+0000 osd.1 (osd.1) 89 : cluster [DBG] 5.12 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:05.343583+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 89)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:33.993219+0000 osd.1 (osd.1) 88 : cluster [DBG] 5.12 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:34.003815+0000 osd.1 (osd.1) 89 : cluster [DBG] 5.12 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:06.343838+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501583 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:07.343998+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:08.344136+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:09.344264+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 868352 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:10.344398+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 868352 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:11.344534+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 860160 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501583 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:12.344654+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 860160 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.721431732s of 10.989896774s, submitted: 4
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:13.344796+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:42.990161+0000 osd.1 (osd.1) 90 : cluster [DBG] 5.16 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:43.000805+0000 osd.1 (osd.1) 91 : cluster [DBG] 5.16 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 91)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:42.990161+0000 osd.1 (osd.1) 90 : cluster [DBG] 5.16 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:43.000805+0000 osd.1 (osd.1) 91 : cluster [DBG] 5.16 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:14.345008+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:44.008516+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.a scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:44.019092+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.a scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 93)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:44.008516+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.a scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:44.019092+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.a scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:15.345199+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:16.345321+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 508818 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:17.345439+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:47.054098+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.9 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:47.064690+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.9 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:18.345622+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 4 last_log 97 sent 95 num 4 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:48.049907+0000 osd.1 (osd.1) 96 : cluster [DBG] 2.5 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:48.060480+0000 osd.1 (osd.1) 97 : cluster [DBG] 2.5 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 95)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:47.054098+0000 osd.1 (osd.1) 94 : cluster [DBG] 5.9 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:47.064690+0000 osd.1 (osd.1) 95 : cluster [DBG] 5.9 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 827392 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:19.345783+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 4 last_log 99 sent 97 num 4 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:49.021778+0000 osd.1 (osd.1) 98 : cluster [DBG] 2.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:49.032359+0000 osd.1 (osd.1) 99 : cluster [DBG] 2.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 97)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:48.049907+0000 osd.1 (osd.1) 96 : cluster [DBG] 2.5 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:48.060480+0000 osd.1 (osd.1) 97 : cluster [DBG] 2.5 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 99)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:49.021778+0000 osd.1 (osd.1) 98 : cluster [DBG] 2.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:49.032359+0000 osd.1 (osd.1) 99 : cluster [DBG] 2.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 819200 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:20.345985+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:49.981562+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.3 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:49.992147+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.3 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 101)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:49.981562+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.3 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:49.992147+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.3 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 811008 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:21.346259+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 802816 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518462 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:22.346470+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:51.950169+0000 osd.1 (osd.1) 102 : cluster [DBG] 2.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:51.960692+0000 osd.1 (osd.1) 103 : cluster [DBG] 2.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 103)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:51.950169+0000 osd.1 (osd.1) 102 : cluster [DBG] 2.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:51.960692+0000 osd.1 (osd.1) 103 : cluster [DBG] 2.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:23.346669+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.557197571s of 10.995463371s, submitted: 14
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:24.346854+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:53.985713+0000 osd.1 (osd.1) 104 : cluster [DBG] 2.9 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:53.996311+0000 osd.1 (osd.1) 105 : cluster [DBG] 2.9 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 105)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:53.985713+0000 osd.1 (osd.1) 104 : cluster [DBG] 2.9 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:53.996311+0000 osd.1 (osd.1) 105 : cluster [DBG] 2.9 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:25.347079+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:54.981080+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.6 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:54.991677+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.6 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 107)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:54.981080+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.6 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:54.991677+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.6 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:26.347274+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 523284 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:27.347455+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 745472 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:28.347603+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:58.045658+0000 osd.1 (osd.1) 108 : cluster [DBG] 2.7 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:35:58.056265+0000 osd.1 (osd.1) 109 : cluster [DBG] 2.7 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 729088 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 109)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:58.045658+0000 osd.1 (osd.1) 108 : cluster [DBG] 2.7 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:35:58.056265+0000 osd.1 (osd.1) 109 : cluster [DBG] 2.7 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:29.347827+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 729088 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:30.347995+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 729088 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:31.348226+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 525695 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:32.348363+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 720896 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:33.348547+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:34.348761+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:35.349000+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:36.349143+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 525695 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:37.349406+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:38.349570+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:39.349732+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:40.349849+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:41.350046+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.051336288s of 18.072429657s, submitted: 6
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528106 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:42.350203+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:12.057937+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.c scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:12.068515+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.c scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 111)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:12.057937+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.c scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:12.068515+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.c scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:43.350378+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:44.350582+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:45.350892+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:46.351050+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:16.083647+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.1 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:16.094211+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.1 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 113)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:16.083647+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.1 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:16.094211+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.1 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 530517 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:47.351317+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 655360 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:48.351443+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:49.351628+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:19.129637+0000 osd.1 (osd.1) 114 : cluster [DBG] 5.1d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:19.140269+0000 osd.1 (osd.1) 115 : cluster [DBG] 5.1d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 115)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:19.129637+0000 osd.1 (osd.1) 114 : cluster [DBG] 5.1d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:19.140269+0000 osd.1 (osd.1) 115 : cluster [DBG] 5.1d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:50.351837+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:51.352031+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:21.086997+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.f scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:21.097617+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.f scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 614400 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 117)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:21.086997+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.f scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:21.097617+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.f scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.031682968s of 10.049464226s, submitted: 8
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 537754 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:52.352249+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:22.107643+0000 osd.1 (osd.1) 118 : cluster [DBG] 5.1a scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:22.118253+0000 osd.1 (osd.1) 119 : cluster [DBG] 5.1a scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:53.352454+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 119)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:22.107643+0000 osd.1 (osd.1) 118 : cluster [DBG] 5.1a scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:22.118253+0000 osd.1 (osd.1) 119 : cluster [DBG] 5.1a scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:54.352590+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:55.352791+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:25.168247+0000 osd.1 (osd.1) 120 : cluster [DBG] 5.19 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:25.178706+0000 osd.1 (osd.1) 121 : cluster [DBG] 5.19 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 121)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:25.168247+0000 osd.1 (osd.1) 120 : cluster [DBG] 5.19 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:25.178706+0000 osd.1 (osd.1) 121 : cluster [DBG] 5.19 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:56.353026+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542580 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:57.353273+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:27.219385+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.18 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:27.230029+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.18 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 123)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:27.219385+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.18 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:27.230029+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.18 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:58.353487+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:59.353627+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:00.353860+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:30.192706+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:30.217352+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 125)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:30.192706+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.4 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:30.217352+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.4 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:01.354072+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 544991 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:02.354245+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 516096 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:03.354364+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 507904 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:04.354539+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 499712 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.036999702s of 13.050308228s, submitted: 8
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:05.354693+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:35.157930+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:35.172056+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 127)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:35.157930+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.b scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:35.172056+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.b scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:06.354916+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:36.188643+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.e scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:36.202879+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.e scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 129)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:36.188643+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.e scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:36.202879+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.e scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 549813 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:07.355174+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:08.355376+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:38.198302+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.1 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:38.208968+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.1 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 131)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:38.198302+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.1 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:38.208968+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.1 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:09.355599+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:10.355914+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:11.356064+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 552224 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:12.356271+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 475136 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:13.356411+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:14.356583+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.037598610s of 10.053073883s, submitted: 6
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:15.356861+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:45.211093+0000 osd.1 (osd.1) 132 : cluster [DBG] 6.6 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:45.225192+0000 osd.1 (osd.1) 133 : cluster [DBG] 6.6 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 133)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:45.211093+0000 osd.1 (osd.1) 132 : cluster [DBG] 6.6 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:45.225192+0000 osd.1 (osd.1) 133 : cluster [DBG] 6.6 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:16.357072+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557046 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:17.357244+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:47.131323+0000 osd.1 (osd.1) 134 : cluster [DBG] 6.2 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:47.141919+0000 osd.1 (osd.1) 135 : cluster [DBG] 6.2 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 135)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:47.131323+0000 osd.1 (osd.1) 134 : cluster [DBG] 6.2 scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:47.141919+0000 osd.1 (osd.1) 135 : cluster [DBG] 6.2 scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:18.357443+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:19.357647+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:20.357786+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:21.357948+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 559457 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:22.358097+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:52.221486+0000 osd.1 (osd.1) 136 : cluster [DBG] 6.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:52.239174+0000 osd.1 (osd.1) 137 : cluster [DBG] 6.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 137)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:52.221486+0000 osd.1 (osd.1) 136 : cluster [DBG] 6.d scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:52.239174+0000 osd.1 (osd.1) 137 : cluster [DBG] 6.d scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:23.358327+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:53.261470+0000 osd.1 (osd.1) 138 : cluster [DBG] 6.c scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  will send 2025-12-01T20:36:53.275627+0000 osd.1 (osd.1) 139 : cluster [DBG] 6.c scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client handle_log_ack log(last 139)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:53.261470+0000 osd.1 (osd.1) 138 : cluster [DBG] 6.c scrub starts
Dec 01 21:03:47 compute-0 ceph-osd[87692]: log_client  logged 2025-12-01T20:36:53.275627+0000 osd.1 (osd.1) 139 : cluster [DBG] 6.c scrub ok
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:24.358529+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:25.358730+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:26.358900+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:27.359048+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:28.359162+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:29.359297+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:30.359494+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:31.359635+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:32.359830+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:33.359969+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:34.360096+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:35.360306+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:36.360464+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:37.360601+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:38.360754+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:39.360928+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:40.361041+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:41.361188+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:42.361326+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:43.361607+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:44.361935+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:45.362308+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 335872 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:46.362473+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:47.362660+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 327680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:48.362807+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:49.362960+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:50.363093+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:51.363230+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:52.363377+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:53.363555+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:54.363705+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:55.363900+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:56.364071+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:57.364233+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:58.364354+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:59.364509+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:00.364668+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:01.364804+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:02.364931+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:03.365050+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:04.365160+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 237568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:05.365324+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:06.365443+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:07.365559+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:08.365706+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:09.365842+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:10.365986+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:11.366156+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:12.366339+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:13.366543+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:14.366654+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:15.366853+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:16.366988+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:17.367120+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:18.367249+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:19.367370+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:20.367503+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:21.367832+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:22.367981+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:23.368091+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:24.368314+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:25.368529+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:26.368645+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 163840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:27.368772+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:28.368926+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:29.369064+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:30.369257+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:31.369410+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:32.369523+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:33.369651+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:34.369799+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:35.370036+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:36.370245+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:37.370377+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:38.370531+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:39.370685+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:40.370839+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:41.371028+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:42.371217+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:43.371351+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:44.371514+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:45.371707+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:46.371881+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:47.372051+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:48.372202+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 90112 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:49.372353+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:50.372509+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:51.372642+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 81920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:52.372764+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:53.372862+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:54.372970+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:55.373238+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:56.373371+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:57.373485+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:58.373604+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:59.373744+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:00.373896+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:01.374039+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:02.374246+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:03.374370+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:04.374549+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:05.374956+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:06.375112+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:07.375240+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:08.375352+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:09.375612+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:10.375760+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:11.375947+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:12.376088+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:13.376223+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:14.376363+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:15.376588+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:16.376756+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 1048576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:17.376903+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:18.377026+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:19.377149+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:20.377275+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:21.377413+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:22.377551+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/842712939' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 01 21:03:47 compute-0 ceph-mon[75880]: from='client.15080 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:47 compute-0 ceph-mon[75880]: pgmap v1067: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:47 compute-0 ceph-mon[75880]: from='client.15082 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:23.377699+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:24.377919+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:25.378217+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:26.378394+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:27.378573+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:28.378717+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:29.378826+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:30.378940+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:31.379076+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:32.379210+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:33.379382+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:34.379503+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:35.379795+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:36.379981+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:37.380106+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:38.380249+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:39.380388+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:40.380523+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:41.380642+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:42.380785+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:43.380906+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:44.381047+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:45.381263+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:46.381386+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:47.381503+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:48.381624+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:49.381743+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:50.381878+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:51.381967+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:52.382106+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:53.382256+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:54.382407+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:55.382560+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:56.382725+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:57.382909+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:58.383116+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:59.383266+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:00.383398+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:01.383625+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:02.383757+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:03.383893+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:04.384055+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:05.384286+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:06.384454+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:07.384681+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:08.384902+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:09.385101+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:10.385239+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:11.385360+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:12.385499+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:13.385651+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:14.385814+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:15.386011+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:16.386145+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:17.386236+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:18.386360+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:19.386491+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:20.386631+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:21.386768+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:22.386955+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:23.387058+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:24.387197+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:25.387357+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:26.387487+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:27.387618+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:28.387775+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:29.388010+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:30.388141+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:31.388273+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:32.389108+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:33.389261+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:34.389422+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:35.389646+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:36.389832+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:37.390137+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:38.390921+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:39.391449+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:40.391697+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:41.391860+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:42.392065+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:43.392266+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:44.392400+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:45.392581+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:46.392693+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:47.392811+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:48.392945+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:49.393114+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:50.393248+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:51.393431+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:52.393679+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:53.393853+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:54.394048+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:55.394252+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:56.394368+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:57.394504+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:58.394727+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:59.394861+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:00.395044+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:01.395171+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:02.395360+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:03.395497+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:04.395638+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:05.395787+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:06.395914+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:07.396051+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:08.396206+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:09.396347+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:10.396466+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:11.396616+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:12.396753+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:13.396965+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:14.397096+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:15.397327+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:16.397486+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:17.397619+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:18.397759+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:19.397877+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:20.398004+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:21.398233+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:22.398380+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:23.398517+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:24.398669+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:25.399585+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:26.399751+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:27.399924+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:28.400078+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:29.400258+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:30.400421+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:31.400564+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 606208 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:32.400713+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:33.400857+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:34.401019+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:35.401897+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:36.402027+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:37.402272+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:38.402562+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:39.402739+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:40.402879+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:41.403400+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:42.403579+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:43.403750+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:44.403961+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:45.404153+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:46.404280+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:47.404412+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:48.404535+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:49.404670+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:50.404790+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:51.404914+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:52.405081+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:53.405230+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:54.405367+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:55.405591+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:56.405746+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:57.405906+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:58.406040+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:59.406249+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:00.406457+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:01.406624+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:02.406815+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:03.407005+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:04.407171+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:05.407462+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:06.407629+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:07.407811+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:08.408010+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:09.408225+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:10.408447+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:11.408614+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:12.408817+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:13.409007+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:14.409238+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:15.409462+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:16.409638+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:17.409831+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:18.410006+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:19.410146+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:20.410318+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:21.410464+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:22.410625+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:23.410827+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:24.411038+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:25.411229+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:26.411359+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:27.411504+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:28.411645+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:29.411830+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:30.412015+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:31.412220+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:32.412402+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:33.412552+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:34.412699+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:35.412932+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:36.413104+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:37.413269+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:38.414087+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:39.414330+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:40.414458+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:41.414592+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:42.414948+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:43.415169+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:44.415328+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:45.415567+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:46.415973+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 360448 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:47.416159+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:48.416307+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 352256 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:49.416431+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:50.416574+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 344064 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 4512 writes, 20K keys, 4512 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4512 writes, 503 syncs, 8.97 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4512 writes, 20K keys, 4512 commit groups, 1.0 writes per commit group, ingest: 16.59 MB, 0.03 MB/s
                                           Interval WAL: 4512 writes, 503 syncs, 8.97 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:51.416973+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 270336 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:52.417225+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:53.417460+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:54.417691+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:55.418149+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:56.418507+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:57.418683+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:58.418914+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:59.419280+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:00.419596+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:01.419821+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:02.419985+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:03.420348+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:04.420551+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:05.420863+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:06.421268+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:07.421579+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:08.421838+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:09.422032+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:10.422511+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:11.422641+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:12.422801+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:13.422968+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:14.423107+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:15.423239+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:16.423515+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:17.423881+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:18.424571+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:19.425147+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:20.425406+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:21.425770+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:22.426375+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:23.426734+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:24.427024+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:25.427700+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:26.428022+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:27.428521+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:28.428664+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:29.428952+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:30.429444+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:31.429761+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:32.429947+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:33.430268+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:34.430645+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:35.430958+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:36.431243+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:37.431444+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:38.431680+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:39.431983+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:40.432253+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:41.432513+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:42.432732+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:43.432901+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:44.433044+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:45.433243+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:46.433396+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:47.433535+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:48.433656+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:49.433787+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:50.434000+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:51.434168+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:52.434304+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:53.434433+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:54.434572+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:55.434801+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:56.434955+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:57.435096+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:58.435280+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:59.435449+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:00.435592+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:01.435779+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:02.435981+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:03.436131+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:04.436268+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:05.436415+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:06.436607+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:07.436784+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:08.436949+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:09.437125+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:10.437250+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:11.437367+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:12.437546+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:13.437715+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:14.437841+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:15.438070+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:16.438222+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:17.438424+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:18.438565+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:19.438703+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:20.438840+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:21.439038+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:22.439211+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:23.439362+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:24.439515+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:25.439704+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:26.439868+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:27.440044+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:28.440242+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:29.440369+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:30.440542+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:31.440682+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:32.440799+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:33.440936+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:34.441099+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:35.441259+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:36.441429+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:37.441578+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:38.441953+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:39.442137+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:40.442349+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:41.442510+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 966656 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:42.444333+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:43.444542+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:44.444671+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:45.445051+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:46.445270+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:47.445420+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:48.445550+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:49.445673+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:50.445818+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:51.445984+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:52.446162+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:53.446249+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:54.446364+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:55.446522+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:56.446662+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:57.446883+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:58.447104+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:59.447253+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:00.447370+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:01.447503+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:02.447657+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:03.447755+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:04.447952+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:05.448541+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:06.448781+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:07.448922+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:08.449090+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:09.449289+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:10.449495+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:11.449671+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:12.449834+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:13.449958+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:14.450086+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:15.450256+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:16.450379+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:17.450537+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:18.450678+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:19.450854+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:20.450991+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:21.451215+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:22.451375+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:23.451513+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:24.451640+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:25.451886+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:26.452036+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:27.452205+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:28.452324+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:29.452471+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:30.452609+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:31.452722+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:32.452854+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:33.453030+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:34.453141+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:35.453390+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:36.453493+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:37.453599+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:38.453717+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:39.453928+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:40.454064+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:41.454221+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:42.454396+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:43.454506+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:44.454640+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:45.454793+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:46.454925+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:47.455111+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:48.455242+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:49.455440+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:50.455634+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:51.455759+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:52.455889+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:53.456047+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:54.456235+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:55.456418+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:56.456545+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:57.456692+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:58.456910+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:59.457051+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:00.457211+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:01.457366+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:02.457494+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:03.457634+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:04.457757+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:05.457990+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:06.458218+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:07.458344+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:08.458482+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:09.458614+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:10.458773+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:11.458951+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:12.459118+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:13.459279+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:14.459437+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:15.459641+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:16.459788+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:17.460099+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:18.460241+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:19.460409+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:20.460527+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:21.460679+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:22.460801+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:23.461014+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:24.461155+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:25.461313+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:26.461477+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:27.461625+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:28.461756+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:29.461936+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:30.462078+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:31.462263+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:32.462455+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:33.462603+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:34.462771+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:35.462963+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:36.463136+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:37.463258+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:38.463415+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:39.463570+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:40.463732+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:41.464104+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:42.464259+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:43.464386+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:44.464509+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:45.464678+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:46.464820+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:47.464936+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:48.465337+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:49.465534+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:50.465649+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:51.465951+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:52.466115+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:53.466316+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:54.466453+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:55.466655+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:56.466844+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:57.467384+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:58.467804+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:59.468151+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:00.468474+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:01.468801+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:02.468986+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:03.469233+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:04.469392+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:05.469592+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:06.469718+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:07.469894+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:08.470059+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:09.470271+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:10.470393+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:11.470617+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:12.470826+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:13.471001+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:14.471245+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:15.471520+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:16.471739+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:17.471935+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:18.472147+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:19.472264+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:20.472432+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:21.472603+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:22.472841+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:23.472965+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:24.473149+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:25.473355+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:26.473527+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:27.473681+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:28.474300+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:29.474437+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:30.474699+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:31.474807+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:32.474924+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:33.475150+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:34.475351+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:35.475522+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:36.475629+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:37.475778+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:38.475944+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:39.476079+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:40.476278+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:41.476426+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:42.476538+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:43.476676+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:44.476837+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:45.477028+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:46.477254+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:47.477463+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:48.477696+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:49.477844+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:50.477968+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:51.478144+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:52.478235+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:53.478411+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:54.478576+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:55.478790+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:56.478923+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: mgrc ms_handle_reset ms_handle_reset con 0x563147b44000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2943709997
Dec 01 21:03:47 compute-0 ceph-osd[87692]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2943709997,v1:192.168.122.100:6801/2943709997]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: get_auth_request con 0x563148678400 auth_method 0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: mgrc handle_mgr_configure stats_period=5
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:57.479083+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 ms_handle_reset con 0x563146fb8400 session 0x56314785cc40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9c00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 ms_handle_reset con 0x563148147c00 session 0x5631480d88c0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:58.479313+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 385024 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:59.479479+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 385024 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:00.479609+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:01.479806+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:02.480019+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:03.480197+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:04.480392+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:05.480604+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:06.480762+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:07.480896+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:08.481006+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:09.481168+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:10.481366+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:11.481545+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:12.481715+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:13.481937+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:14.482124+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:15.482565+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:16.482729+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:17.482959+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:18.483258+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:19.483415+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:20.483723+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:21.483935+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:22.513903+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:23.514052+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:24.514394+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:25.515308+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:26.515535+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:27.515749+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:28.515952+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:29.516224+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:30.516422+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:31.516595+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:32.516830+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:33.517025+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:34.517232+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:35.517514+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:36.517778+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:37.517989+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:38.518159+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:39.518346+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:40.518515+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:41.518672+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:42.518838+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:43.519011+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:44.519208+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:45.519396+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:46.519525+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:47.519690+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:48.519850+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:49.520001+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:50.520135+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:51.520257+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:52.520440+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:53.520644+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:54.520786+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:55.521207+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:56.521384+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:57.521542+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:58.521701+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:59.521878+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:00.522112+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:01.522251+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:02.522369+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:03.522512+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:04.522652+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:05.522941+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:06.523231+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:07.523506+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:08.523671+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:09.523841+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:10.524063+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:11.524306+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:12.524481+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:13.524642+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:14.524823+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:15.525038+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:16.525347+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:17.525601+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:18.525756+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:19.525989+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:20.526266+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:21.526501+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:22.526741+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:23.526924+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:24.527079+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:25.527308+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 360448 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:26.527510+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:27.527710+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:28.527893+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:29.528080+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:30.528235+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:31.528391+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:32.528599+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:33.528753+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:34.528925+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:35.529152+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:36.529340+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:37.529542+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:38.529744+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:39.529918+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:40.530150+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:41.530377+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:42.530582+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:43.530943+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:44.531258+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:45.531543+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:46.531731+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:47.531922+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:48.532069+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:49.532253+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:50.532456+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:51.532748+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:52.532945+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:53.533089+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:54.533277+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:55.533621+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:56.533817+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:57.534034+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:58.534272+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:59.534428+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:00.534676+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:01.534947+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:02.535248+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:03.535484+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:04.535709+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3186377781' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:05.536237+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:06.536513+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:07.536729+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:08.536912+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:09.537287+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:10.537535+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:11.539429+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:12.540244+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:13.540786+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:14.546807+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:15.551716+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:16.551945+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:17.552537+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:18.552718+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:19.553585+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:20.553786+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:21.554086+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:22.554441+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:23.554606+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:24.554898+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:25.555117+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:26.555405+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:27.555694+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:28.555848+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:29.556117+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:30.556373+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:31.556692+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:32.556934+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:33.557120+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:34.557280+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:35.557444+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:36.557555+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:37.557698+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:38.557842+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:39.558026+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:40.558209+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:41.558380+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:42.558494+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:43.558637+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:44.558810+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:45.558997+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:46.559126+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:47.559223+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:48.559440+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:49.559586+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:50.559708+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:51.562321+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:52.562445+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:53.562581+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:54.562703+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:55.562849+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:56.562994+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:57.563155+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:58.563291+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:59.563415+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:00.563520+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:01.563642+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:02.563782+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:03.563933+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:04.564094+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:05.564285+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:06.564436+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:07.564609+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:08.564747+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:09.564875+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:10.565046+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:11.565207+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:12.565381+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:13.565526+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:14.565688+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:15.565845+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:16.565953+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:17.566108+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:18.566298+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:19.566521+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:20.566776+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:21.566951+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:22.567098+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:23.567223+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:24.567398+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:25.567616+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:26.567758+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:27.567927+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:28.568046+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:29.568235+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:30.568397+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:31.568535+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:32.568695+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:33.568850+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:34.569001+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:35.569150+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:36.569280+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:37.569445+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:38.569598+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:39.569733+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:40.569880+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 417792 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:41.570024+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:42.570144+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:43.570356+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:44.570660+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:45.571059+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:46.571256+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:47.571423+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:48.571610+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:49.571845+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:50.572020+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:51.572121+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:52.572302+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:53.572451+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:54.572617+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:55.572810+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:56.572932+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:57.573095+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:58.573215+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:59.573370+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:00.573500+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:01.573624+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:02.573744+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 409600 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:03.573865+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:04.573983+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:05.574097+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:06.574223+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:07.574339+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:08.574469+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:09.574558+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:10.574682+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:11.575019+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:12.575142+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:13.575334+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:14.575488+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:15.575689+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:16.575805+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:17.575983+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:18.576113+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:19.576219+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:20.576463+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:21.576616+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:22.576797+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:23.577024+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread fragmentation_score=0.000121 took=0.000015s
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:24.577311+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:25.577573+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:26.577715+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:27.577948+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:28.578079+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:29.578205+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:30.578362+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:31.578530+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:32.578694+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:33.578834+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:34.578976+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:35.579227+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:36.579404+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:37.579505+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:38.579622+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:39.579770+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:40.579955+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:41.580096+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:42.580264+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:43.580437+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:44.580642+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:45.580881+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:46.581057+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:47.581289+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:48.581457+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:49.581642+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 401408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:50.581846+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 4512 writes, 20K keys, 4512 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4512 writes, 503 syncs, 8.97 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e09a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563145e098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:51.581999+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:52.582154+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:53.582378+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:54.582493+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:55.582724+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:56.582930+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:57.583091+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:58.583277+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:59.583376+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:00.583449+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:01.583599+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:02.583718+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 heartbeat osd_stat(store_statfs(0x4fe0aa000/0x0/0x4ffc00000, data 0xb913b/0x122000, compress 0x0/0x0/0x0, omap 0xb020, meta 0x1a24fe0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:03.583840+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:04.584028+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 368640 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:05.584293+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 561868 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 950.738098145s of 950.755859375s, submitted: 8
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 237568 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:06.584475+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xba710/0x126000, compress 0x0/0x0/0x0, omap 0xb2ac, meta 0x1a24d54), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 16924672 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:07.584594+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 16924672 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:08.584707+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 16924672 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:09.584839+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 67 ms_handle_reset con 0x56314828a400 session 0x56314785d880
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 16916480 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:10.585003+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 616850 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 16834560 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fd899000/0x0/0x4ffc00000, data 0x8bd335/0x92d000, compress 0x0/0x0/0x0, omap 0xb7d0, meta 0x1a24830), peers [0,2] op hist [0,0,0,0,1])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:11.585166+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 25051136 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:12.585371+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 ms_handle_reset con 0x5631474f9800 session 0x5631499e2a80
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:13.585559+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09d000/0x0/0x4ffc00000, data 0x10bd368/0x112f000, compress 0x0/0x0/0x0, omap 0xb7d0, meta 0x1a24830), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:14.585705+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:15.585847+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:16.585968+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:17.586122+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:18.586293+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:19.586442+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:20.586610+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:21.586766+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:22.586944+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:23.587248+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:24.587445+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:25.587689+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:26.587863+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:27.588050+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:28.588238+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:29.588435+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:30.588595+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:31.588787+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:32.589005+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:33.589227+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:34.589428+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:35.589625+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:36.589794+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:37.589950+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:38.590108+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:39.590284+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:40.590456+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:41.590615+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:42.590779+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:43.590952+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:44.591135+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:45.591369+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69378048 unmapped: 25001984 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd098000/0x0/0x4ffc00000, data 0x10be96d/0x1132000, compress 0x0/0x0/0x0, omap 0xba6b, meta 0x1a24595), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663445 data_alloc: 218103808 data_used: 915
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b10800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.565486908s of 40.443881989s, submitted: 44
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:46.591509+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 24780800 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 69 ms_handle_reset con 0x563149b10800 session 0x563149a36e00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:47.591675+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 24772608 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:48.591799+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 24772608 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b2d000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fc896000/0x0/0x4ffc00000, data 0x18bff5d/0x1936000, compress 0x0/0x0/0x0, omap 0xc29c, meta 0x1a23d64), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:49.592016+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 32800768 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:50.592286+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 32587776 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 70 ms_handle_reset con 0x563149b2d000 session 0x563148cd6a80
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 70 ms_handle_reset con 0x563149af2800 session 0x56314941ca80
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969898 data_alloc: 218103808 data_used: 934
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:51.592459+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 70 heartbeat osd_stat(store_statfs(0x4f988e000/0x0/0x4ffc00000, data 0x48c195d/0x493c000, compress 0x0/0x0/0x0, omap 0xc67d, meta 0x1a23983), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 31776768 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 70 ms_handle_reset con 0x5631474f9800 session 0x56314751c700
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b10800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 70 ms_handle_reset con 0x563149b10800 session 0x5631485e2380
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148b5a000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 71 ms_handle_reset con 0x563148b5a000 session 0x563145e2efc0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149f17c00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1fc00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:52.592590+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 30236672 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 72 ms_handle_reset con 0x563149d1fc00 session 0x5631476a1180
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 72 ms_handle_reset con 0x563149f17c00 session 0x563148cd6fc0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 72 ms_handle_reset con 0x5631474f9800 session 0x563149550540
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148b5a000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 72 ms_handle_reset con 0x563148b5a000 session 0x563149a1da40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:53.592755+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 30343168 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1ec00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fd08a000/0x0/0x4ffc00000, data 0x10c459f/0x1142000, compress 0x0/0x0/0x0, omap 0xd785, meta 0x1a2287b), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:54.592900+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 30343168 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 73 ms_handle_reset con 0x563149d1ec00 session 0x5631476a1a40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:55.593067+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 30212096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698182 data_alloc: 218103808 data_used: 4995
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 74 ms_handle_reset con 0x563149d1f000 session 0x563148cd7340
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.431722641s of 10.013735771s, submitted: 196
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:56.593274+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 29966336 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 74 heartbeat osd_stat(store_statfs(0x4fd084000/0x0/0x4ffc00000, data 0x10c6d9b/0x1144000, compress 0x0/0x0/0x0, omap 0xdfa9, meta 0x1a22057), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 75 ms_handle_reset con 0x563149d1f400 session 0x563149a36e00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fd081000/0x0/0x4ffc00000, data 0x10c87a7/0x1149000, compress 0x0/0x0/0x0, omap 0xe240, meta 0x1a21dc0), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:57.593471+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 30097408 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:58.593640+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 30081024 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 76 ms_handle_reset con 0x5631474f9800 session 0x563149a36380
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:59.593824+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 30031872 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148b5a000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 77 ms_handle_reset con 0x563148b5a000 session 0x5631478cc1c0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:00.594073+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 29908992 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 713014 data_alloc: 218103808 data_used: 4995
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:01.594237+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 29908992 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:02.594468+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fd07a000/0x0/0x4ffc00000, data 0x10cb3c1/0x114f000, compress 0x0/0x0/0x0, omap 0xe6b7, meta 0x1a21949), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 29908992 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:03.594622+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1ec00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 78 ms_handle_reset con 0x563149d1ec00 session 0x56314941d500
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 29720576 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:04.594796+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1fc00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 29523968 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fd078000/0x0/0x4ffc00000, data 0x10cc88d/0x1152000, compress 0x0/0x0/0x0, omap 0xe9a1, meta 0x1a2165f), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 80 ms_handle_reset con 0x563149d1fc00 session 0x5631485e2fc0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 80 ms_handle_reset con 0x563149d1f800 session 0x563149a36fc0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 80 ms_handle_reset con 0x563149d1f000 session 0x563149a37500
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 80 ms_handle_reset con 0x563149d1f800 session 0x563149a36540
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:05.595056+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b10800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 27254784 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 732442 data_alloc: 218103808 data_used: 4995
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 81 ms_handle_reset con 0x563149b10800 session 0x563147963dc0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:06.595238+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149f17c00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 27181056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.240373611s of 10.403330803s, submitted: 106
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 82 ms_handle_reset con 0x563149f17c00 session 0x563148152000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:07.595430+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b2d000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 27164672 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fbeca000/0x0/0x4ffc00000, data 0x10d0c86/0x1160000, compress 0x0/0x0/0x0, omap 0xfb85, meta 0x2bc047b), peers [0,2] op hist [0,0,0,0,1])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 83 ms_handle_reset con 0x563149b2d000 session 0x56314785c8c0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:08.595577+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b10800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 26984448 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:09.595727+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 84 ms_handle_reset con 0x563149b10800 session 0x5631499e3dc0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 26951680 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:10.595917+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 26886144 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 758587 data_alloc: 218103808 data_used: 4995
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 85 ms_handle_reset con 0x563149d1f000 session 0x563149a09340
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:11.596133+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 26869760 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fbebb000/0x0/0x4ffc00000, data 0x10d6066/0x116f000, compress 0x0/0x0/0x0, omap 0x10872, meta 0x2bbf78e), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:12.596294+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 26869760 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:13.596428+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fbeb6000/0x0/0x4ffc00000, data 0x10d764f/0x1172000, compress 0x0/0x0/0x0, omap 0x10b7c, meta 0x2bbf484), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 85 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 26804224 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:14.596754+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 26730496 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 87 ms_handle_reset con 0x563149d1f800 session 0x563149a1ce00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:15.597025+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149f17c00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 25788416 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768219 data_alloc: 218103808 data_used: 5011
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 88 ms_handle_reset con 0x56314828a400 session 0x563148cd6e00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 88 ms_handle_reset con 0x563149f17c00 session 0x563148cd7880
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fbeb3000/0x0/0x4ffc00000, data 0x10d9c1a/0x1176000, compress 0x0/0x0/0x0, omap 0x11751, meta 0x2bbe8af), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:16.597202+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b10800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 25706496 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.924459457s of 10.438578606s, submitted: 180
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 89 ms_handle_reset con 0x56314828a400 session 0x5631499e2540
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:17.597401+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 25690112 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fbeac000/0x0/0x4ffc00000, data 0x10dbeab/0x1178000, compress 0x0/0x0/0x0, omap 0x11fe9, meta 0x2bbe017), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:18.597673+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 25681920 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:19.597863+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 90 ms_handle_reset con 0x563149d1f800 session 0x563149475c00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 25870336 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:20.597999+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 25567232 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 91 ms_handle_reset con 0x563149cf0400 session 0x5631481528c0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775706 data_alloc: 218103808 data_used: 17706
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:21.598329+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 91 ms_handle_reset con 0x563149af2800 session 0x56314751cc40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 25395200 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0c00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fbeaf000/0x0/0x4ffc00000, data 0x10dec89/0x117d000, compress 0x0/0x0/0x0, omap 0x129ba, meta 0x2bbd646), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:22.598515+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 25247744 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 92 ms_handle_reset con 0x563149cf0800 session 0x563149a368c0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 92 ms_handle_reset con 0x563149cf0c00 session 0x563148152c40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:23.598669+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 25206784 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:24.598796+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 93 ms_handle_reset con 0x56314828a400 session 0x563149a1d880
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 25206784 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 93 ms_handle_reset con 0x563149af2800 session 0x563148153c00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:25.598983+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 25182208 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 94 ms_handle_reset con 0x563149cf0400 session 0x5631485c56c0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782580 data_alloc: 218103808 data_used: 18638
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 94 heartbeat osd_stat(store_statfs(0x4fbea6000/0x0/0x4ffc00000, data 0x10e2db0/0x1184000, compress 0x0/0x0/0x0, omap 0x1365b, meta 0x2bbc9a5), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:26.599172+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 25182208 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.109507561s of 10.018264771s, submitted: 188
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 95 ms_handle_reset con 0x563149d1f800 session 0x5631476a16c0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:27.599395+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 25149440 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 96 ms_handle_reset con 0x56314828a400 session 0x563149a36c40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:28.599602+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 25141248 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:29.599735+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 25116672 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fbe9a000/0x0/0x4ffc00000, data 0x10e6eec/0x118c000, compress 0x0/0x0/0x0, omap 0x14125, meta 0x2bbbedb), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:30.599849+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 25133056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791338 data_alloc: 218103808 data_used: 19251
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149af2800 session 0x5631485c5a40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149cf0400 session 0x56314751c8c0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0c00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149cf0c00 session 0x563148cd7a40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:31.599991+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf1000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149cf1000 session 0x5631485c4540
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149af2800 session 0x5631478ccc40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x56314828a400 session 0x5631478cda40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 24969216 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149cf0400 session 0x5631499e2c40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0c00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x563149cf0c00 session 0x5631485c5dc0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631474f9800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x5631474f9800 session 0x563147987a40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 ms_handle_reset con 0x56314828a400 session 0x563149a376c0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:32.600113+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 24936448 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:33.600264+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 98 ms_handle_reset con 0x563149af2800 session 0x5631499e2380
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 24616960 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149cf0c00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:34.600368+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 24707072 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:35.600500+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 24707072 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802682 data_alloc: 218103808 data_used: 21299
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 98 heartbeat osd_stat(store_statfs(0x4fbe75000/0x0/0x4ffc00000, data 0x110c455/0x11b5000, compress 0x0/0x0/0x0, omap 0x14912, meta 0x2bbb6ee), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:36.600641+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 24707072 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:37.600835+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 24707072 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.560202599s of 10.949222565s, submitted: 79
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148b5a000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 99 ms_handle_reset con 0x563148b5a000 session 0x563149a36a80
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631478e1800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:38.600949+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 24625152 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 100 ms_handle_reset con 0x5631478e1800 session 0x563148662a80
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148613800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b2d000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 100 ms_handle_reset con 0x563148613800 session 0x563147986380
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 100 ms_handle_reset con 0x563149b2d000 session 0x5631486628c0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:39.601061+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148613800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 100 ms_handle_reset con 0x563148613800 session 0x5631485c5880
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631478e1800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 24264704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fbe72000/0x0/0x4ffc00000, data 0x110d905/0x11b8000, compress 0x0/0x0/0x0, omap 0x14ba6, meta 0x2bbb45a), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 101 ms_handle_reset con 0x5631478e1800 session 0x5631499e3340
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:40.601246+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 24231936 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817791 data_alloc: 218103808 data_used: 25461
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:41.601416+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 102 ms_handle_reset con 0x56314828a400 session 0x563147987880
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 24223744 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:42.601588+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 24207360 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148b5a000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 102 ms_handle_reset con 0x563148b5a000 session 0x5631476a0700
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631478e1800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 102 ms_handle_reset con 0x5631478e1800 session 0x5631478cc000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 102 ms_handle_reset con 0x56314828a400 session 0x5631480d9500
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:43.601735+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 24190976 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148613800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:44.601841+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 102 ms_handle_reset con 0x563148613800 session 0x5631476a1500
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 24182784 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b2d000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:45.602021+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 102 heartbeat osd_stat(store_statfs(0x4fbe69000/0x0/0x4ffc00000, data 0x1111eec/0x11c3000, compress 0x0/0x0/0x0, omap 0x15f55, meta 0x2bba0ab), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79650816 unmapped: 23126016 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 103 ms_handle_reset con 0x563149b2d000 session 0x563148cd6c40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819548 data_alloc: 218103808 data_used: 25461
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:46.602160+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23085056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 103 ms_handle_reset con 0x563149cf0400 session 0x5631479876c0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 103 ms_handle_reset con 0x563149cf0c00 session 0x56314941c540
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:47.602309+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23085056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631478e1800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.259161949s of 10.002922058s, submitted: 116
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:48.602437+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23052288 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fbe63000/0x0/0x4ffc00000, data 0x11145e5/0x11c7000, compress 0x0/0x0/0x0, omap 0x16900, meta 0x2bb9700), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 105 ms_handle_reset con 0x5631478e1800 session 0x563149550c40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:49.602568+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 22962176 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fbe83000/0x0/0x4ffc00000, data 0x10f1ba1/0x11a4000, compress 0x0/0x0/0x0, omap 0x1703d, meta 0x2bb8fc3), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:50.602685+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 22962176 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 822506 data_alloc: 218103808 data_used: 19251
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:51.602847+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 22953984 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:52.603035+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 22953984 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:53.603223+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79822848 unmapped: 22953984 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 106 ms_handle_reset con 0x563149b10800 session 0x56314751c540
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 106 ms_handle_reset con 0x563149d1f000 session 0x563148663dc0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:54.603371+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x56314828a400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 106 ms_handle_reset con 0x56314828a400 session 0x563149a36c40
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23085056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:55.603544+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x5631478e1800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 106 ms_handle_reset con 0x5631478e1800 session 0x563149551c00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79691776 unmapped: 23085056 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149d1f000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823483 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0x10f306d/0x11a7000, compress 0x0/0x0/0x0, omap 0x175cd, meta 0x2bb8a33), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:56.603702+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23068672 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 107 ms_handle_reset con 0x563149d1f000 session 0x5631485c4540
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:57.603842+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79708160 unmapped: 23068672 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:58.604000+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.773540497s of 10.412032127s, submitted: 169
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:59.604162+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:00.604361+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 829180 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:01.604546+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fbe7b000/0x0/0x4ffc00000, data 0x10f5b6a/0x11ad000, compress 0x0/0x0/0x0, omap 0x17e67, meta 0x2bb8199), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:02.604749+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:03.604941+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79716352 unmapped: 23060480 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:04.605113+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23052288 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:05.605267+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23052288 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:06.605391+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23052288 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:07.605493+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79724544 unmapped: 23052288 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:08.605640+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:09.605786+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:10.605932+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:11.606108+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:12.606286+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:13.606436+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:14.606566+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:15.606806+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:16.607007+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:17.607310+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:18.607459+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:19.607624+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:20.607780+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:21.607951+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:22.608098+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:23.608313+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79732736 unmapped: 23044096 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:24.608530+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:25.608758+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:26.608961+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:27.609143+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:28.609257+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:29.609407+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:30.609514+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:31.609794+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:32.610071+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:33.610335+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:34.610502+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:35.610796+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:36.610997+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:37.611253+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:38.611453+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:39.611612+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:40.611865+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:41.612081+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:42.612239+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:43.612406+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:44.612591+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:45.612800+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:46.612990+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:47.613166+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:48.613401+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:49.614844+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:50.615009+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:51.615122+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:52.615280+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:53.615435+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:54.615588+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:55.615741+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:56.615899+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:57.616093+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:58.616279+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:59.616430+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:00.616600+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:01.616774+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:02.617006+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79740928 unmapped: 23035904 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:03.617216+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:04.617347+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:05.617515+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:06.617681+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:07.617935+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:08.618134+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:09.618391+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:10.618561+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:11.618750+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:12.618958+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:13.619243+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:14.619393+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:15.620668+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:16.622078+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:17.622749+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:18.622962+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:19.623222+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:20.623486+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:21.623711+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:22.624026+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:23.624582+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:24.624805+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:25.643270+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79699968 unmapped: 23076864 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:26.643453+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 79855616 unmapped: 22921216 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:27.643671+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'config diff' '{prefix=config diff}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'config show' '{prefix=config show}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'counter dump' '{prefix=counter dump}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 22732800 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'counter schema' '{prefix=counter schema}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:28.643904+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 22519808 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:29.644147+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 22519808 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:30.650316+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'log dump' '{prefix=log dump}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80257024 unmapped: 22519808 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'perf dump' '{prefix=perf dump}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:31.650518+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'perf schema' '{prefix=perf schema}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:32.650706+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:33.650957+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:34.651163+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:35.651368+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:36.651522+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:37.651651+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:38.651808+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:39.651983+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:40.652147+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:41.652280+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:42.652451+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:43.652575+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:44.652715+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:45.652905+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:46.653075+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:47.653245+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:48.653381+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:49.653522+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:50.653664+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:51.653788+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:52.653926+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:53.654046+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:54.654187+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:55.654345+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:56.655069+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:57.655734+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:58.655975+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:59.656105+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:00.656239+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:01.656374+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:02.656655+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:03.656815+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:04.656958+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:05.657158+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:06.657308+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:07.657455+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:08.657613+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:09.657807+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:10.658326+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:11.658497+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:12.658705+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:13.658956+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:14.659235+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:15.659489+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:16.659623+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:17.659765+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:18.660033+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:19.660320+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:20.660601+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:21.660874+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:22.669279+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:23.669472+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:24.669713+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:25.669991+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:26.670295+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:27.670521+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:28.670706+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:29.670930+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:30.671223+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:31.671432+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:32.671683+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:33.671846+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:34.672000+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:35.672265+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:36.672439+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:37.672590+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:38.672758+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:39.673018+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:40.673232+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80347136 unmapped: 22429696 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:41.673439+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:42.673621+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:43.673841+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:44.674067+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:45.674495+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:46.674645+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:47.674813+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:48.675005+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:49.675168+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:50.675462+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:51.675664+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:52.675886+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:53.676122+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:54.676376+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:55.676607+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:56.676797+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:57.677052+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:58.677298+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:59.677481+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:00.677669+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:01.677822+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:02.677960+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 22421504 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:03.678089+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:04.678262+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:05.678461+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:06.678659+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:07.678832+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:08.678973+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:09.679122+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:10.679351+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:11.679508+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:12.679666+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:13.679847+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:14.680028+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:15.680270+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:16.680452+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:17.680599+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:18.680764+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:19.681169+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:20.681479+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:21.681647+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 22413312 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:22.681845+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 22405120 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:23.681999+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 22405120 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:24.682144+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 22405120 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:25.682705+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 22405120 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:26.682838+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 22405120 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:27.682995+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 22405120 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:28.683136+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 22405120 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:29.683291+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 22405120 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:30.683421+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 22405120 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:31.683674+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 22405120 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:32.683843+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:33.683975+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:34.684141+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:35.684332+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:36.684514+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:37.684706+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:38.684877+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:39.685088+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:40.685289+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:41.685521+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:42.685727+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:43.685932+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:44.686167+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:45.686395+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:46.686613+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:47.686772+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:48.687015+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:49.687238+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:50.687415+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:51.687564+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:52.687766+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:53.687931+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:54.688130+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:55.688367+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:56.688603+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:57.688760+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:58.688951+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:59.689146+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:00.689391+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:01.689563+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:02.689722+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:03.689906+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:04.690086+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 22396928 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:05.691478+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:06.691614+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:07.691821+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:08.692036+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:09.692342+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:10.692575+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:11.692788+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:12.693040+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:13.693317+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:14.693613+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:15.693884+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:16.695020+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:17.696092+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:18.697058+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 22388736 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:19.697570+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:20.697797+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:21.698044+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:22.698302+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:23.698565+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:24.699062+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:25.699551+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:26.699812+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:27.701034+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:28.701366+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:29.701618+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:30.702461+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:31.702919+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:32.703428+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:33.703694+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:34.703949+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:35.704316+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:36.704644+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 22380544 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:37.704929+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:38.705162+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:39.705431+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:40.705724+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:41.706022+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:42.706359+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:43.706686+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:44.706928+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:45.707333+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:46.707622+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:47.707921+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:48.708281+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:49.708546+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:50.708848+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:51.709115+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:52.709476+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:53.709769+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:54.710019+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:55.710321+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:56.710641+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:57.710910+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 22372352 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:58.711142+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 22364160 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:59.711365+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 22364160 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:00.711606+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 22364160 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:01.711781+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 22364160 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:02.712051+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 22364160 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:03.712323+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 22355968 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:04.712578+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 22347776 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:05.712926+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 22347776 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:06.713172+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 22347776 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:07.713475+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 22347776 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:08.713762+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 22347776 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:09.714023+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 22347776 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:10.714240+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:11.714525+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:12.714700+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:13.714819+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:14.714981+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:15.715150+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:16.715261+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:17.715444+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:18.715638+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:19.715816+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:20.716051+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:21.716262+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:22.716399+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:23.716613+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:24.716780+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:25.716958+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:26.717135+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:27.717306+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:28.717488+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:29.717688+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:30.717883+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:31.718088+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:32.718284+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:33.718475+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:34.718626+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:35.718784+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:36.718965+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:37.719128+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:38.719261+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 22339584 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:39.719431+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:40.719602+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:41.719832+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:42.720005+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:43.720155+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:44.720370+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:45.720534+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:46.720771+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:47.721126+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:48.721416+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:49.721656+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:50.721826+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:51.721983+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:52.722133+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:53.722370+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:54.722564+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:55.722838+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:56.723026+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:57.723295+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:58.723471+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:59.723679+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:00.723912+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:01.724322+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 22331392 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:02.724551+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 22323200 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:03.724762+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 22323200 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:04.725024+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 22323200 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:05.725503+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 22323200 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:06.725742+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 22323200 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:07.726020+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 22323200 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:08.726247+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:09.726543+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:10.726795+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:11.727020+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:12.727302+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:13.727471+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:14.727660+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:15.728163+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:16.728394+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:17.728569+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:18.728736+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:19.728893+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:20.729072+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:21.729210+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:22.729358+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:23.729556+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:24.729770+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:25.730318+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:26.730508+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:27.730774+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:28.730964+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:29.731230+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:30.731423+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:31.731644+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:32.731834+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:33.732054+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 22315008 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:34.732277+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: no keepalive since 2025-12-01T21:01:04.732343+0000 (2106-02-07T06:28:15.999928+0000 seconds), reconnecting
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _reopen_session rank -1
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _add_conns ranks=[0]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient(hunting): picked mon.compute-0 con 0x563149cf0c00 addr [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient(hunting): start opening mon connection
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient(hunting): _renew_subs
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient(hunting): get_auth_request con 0x563149cf0c00 auth_method 0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient(hunting): _init_auth method 2
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient(hunting): _init_auth already have auth, reseting
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient(hunting): handle_auth_reply_more payload 9
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient(hunting): handle_auth_reply_more payload_len 9
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient(hunting): handle_auth_done global_id 14197 payload 293
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _finish_hunting 0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: found mon.compute-0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _finish_auth 0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:34.733730+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_monmap mon_map magic: 0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient:  got monmap 1 from mon.compute-0 (according to old e1)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: dump:
                                           epoch 1
                                           fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
                                           last_changed 2025-12-01T20:31:09.927398+0000
                                           created 2025-12-01T20:31:09.927398+0000
                                           min_mon_release 20 (tentacle)
                                           election_strategy: 1
                                           0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_config config(9 keys)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: set_mon_vals no callback set
Dec 01 21:03:47 compute-0 ceph-osd[87692]: mgrc handle_mgr_map Got map version 9
Dec 01 21:03:47 compute-0 ceph-osd[87692]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2943709997,v1:192.168.122.100:6801/2943709997]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 22257664 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 22257664 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 22257664 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 22249472 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:39.233453+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 22249472 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:40.233689+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 22249472 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:41.233849+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 22249472 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:42.233993+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 22249472 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:43.234164+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 22241280 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:44.234405+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 22241280 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:45.234625+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 22241280 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:46.234926+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 22241280 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:47.235145+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 22241280 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:48.235317+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 22241280 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:49.235527+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 22241280 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:50.235706+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 22241280 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:51.235903+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 22241280 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:52.236122+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:53.236319+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:54.236519+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:55.236690+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:56.236861+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:57.237144+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:58.237468+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:59.237789+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:00.238039+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:01.238323+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:02.238501+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:03.238780+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:04.239106+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:05.239449+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:06.239773+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:07.240045+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:08.240283+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:09.240552+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:10.240844+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:11.241222+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:12.241463+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:13.241764+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:14.241992+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:15.242260+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 22233088 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:16.242533+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 22224896 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:17.243014+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 22224896 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:18.243290+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 22224896 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:19.243577+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 22224896 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:20.243835+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:21.244068+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:22.244304+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:23.244612+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:24.244769+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:25.244946+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:26.245161+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:27.245314+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:28.245451+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:29.245605+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:30.245814+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:31.246053+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:32.246336+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 22216704 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:33.246562+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:34.246834+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:35.247067+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:36.247377+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:37.247608+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:38.247824+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:39.248019+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:40.248307+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:41.248465+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:42.248741+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:43.248901+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:44.249104+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 01 21:03:47 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032610455' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:45.249375+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:46.249656+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:47.249907+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:48.250149+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:49.250448+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:50.250650+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:51.250930+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6522 writes, 26K keys, 6522 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6522 writes, 1409 syncs, 4.63 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2010 writes, 5464 keys, 2010 commit groups, 1.0 writes per commit group, ingest: 2.73 MB, 0.00 MB/s
                                           Interval WAL: 2010 writes, 906 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:52.251144+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:53.251347+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:54.251538+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:55.251765+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 22208512 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:56.252012+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 ms_handle_reset con 0x563146fb8800 session 0x563145e2e000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563148613800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 22364160 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: mgrc ms_handle_reset ms_handle_reset con 0x563148678400
Dec 01 21:03:47 compute-0 ceph-osd[87692]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2943709997
Dec 01 21:03:47 compute-0 ceph-osd[87692]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2943709997,v1:192.168.122.100:6801/2943709997]
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: get_auth_request con 0x563149cf0400 auth_method 0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: mgrc handle_mgr_configure stats_period=5
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:57.252150+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80601088 unmapped: 22175744 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 ms_handle_reset con 0x5631474f9c00 session 0x5631499e2e00
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149b2d000
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 ms_handle_reset con 0x5631474f9400 session 0x5631478cddc0
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: handle_auth_request added challenge on 0x563149af2800
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:58.252344+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 22306816 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:59.252524+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 22306816 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:00.252702+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 22306816 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:01.252880+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 22306816 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:02.253046+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 22306816 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:03.253260+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 22298624 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:04.253471+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 22298624 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:05.253886+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 22298624 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:06.254093+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 22298624 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:07.254272+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 22298624 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:08.254428+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 22298624 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:09.254612+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 22298624 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:10.254773+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 22298624 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:11.254930+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 22298624 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:12.255112+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:13.255293+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:14.255489+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:15.255678+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:16.255930+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:17.256087+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:18.256314+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:19.256496+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:20.256672+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:21.256833+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:22.256954+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:23.257100+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:24.257255+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:25.257361+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:26.257553+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:27.257781+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:28.258003+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:29.258240+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 22454272 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:30.258455+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:31.258587+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:32.258716+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:33.258874+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:34.259060+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:35.259243+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:36.259419+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:37.259559+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:38.259638+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:39.259857+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:40.259989+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:41.260141+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:42.260333+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:43.260573+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:44.260828+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:45.261030+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:46.261258+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:47.261427+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:48.261597+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:49.261760+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:50.261975+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:51.262152+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:52.262329+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:53.262505+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:54.262677+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:55.262883+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:56.263156+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:57.263477+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:58.263666+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:59.263815+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:00.263951+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:01.264240+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:02.264371+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:03.264605+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:04.264819+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:05.264989+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:06.265172+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:07.265350+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:08.265516+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:09.265731+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:10.265943+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:11.266160+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 22446080 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:12.266371+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:13.266575+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:47 compute-0 ceph-osd[87692]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:47 compute-0 ceph-osd[87692]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 831490 data_alloc: 218103808 data_used: 23277
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 22437888 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:14.266708+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'config diff' '{prefix=config diff}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'config show' '{prefix=config show}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'counter dump' '{prefix=counter dump}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 22290432 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:15.266839+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'counter schema' '{prefix=counter schema}'
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 22257664 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:16.266997+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0x10f701a/0x11b0000, compress 0x0/0x0/0x0, omap 0x18116, meta 0x2bb7eea), peers [0,2] op hist [])
Dec 01 21:03:47 compute-0 ceph-osd[87692]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 22257664 heap: 102776832 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: tick
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_tickets
Dec 01 21:03:47 compute-0 ceph-osd[87692]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:17.267154+0000)
Dec 01 21:03:47 compute-0 ceph-osd[87692]: do_command 'log dump' '{prefix=log dump}'
Dec 01 21:03:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 01 21:03:48 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/143846718' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 01 21:03:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 01 21:03:48 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232574263' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 01 21:03:48 compute-0 rsyslogd[1006]: imjournal from <np0005541545:ceph-osd>: begin to drop messages due to rate-limiting
Dec 01 21:03:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:48 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3186377781' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 01 21:03:48 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1032610455' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 01 21:03:48 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/143846718' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 01 21:03:48 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/232574263' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 01 21:03:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 01 21:03:48 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171447713' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 01 21:03:48 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 01 21:03:48 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378869715' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 01 21:03:49 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1068: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 01 21:03:49 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501937113' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 01 21:03:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 01 21:03:49 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1585201759' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 01 21:03:49 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 01 21:03:49 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3235823392' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 01 21:03:49 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1171447713' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 01 21:03:49 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2378869715' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 01 21:03:49 compute-0 ceph-mon[75880]: pgmap v1068: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:49 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2501937113' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 01 21:03:49 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1585201759' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 01 21:03:49 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3235823392' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 01 21:03:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 01 21:03:50 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218598630' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 01 21:03:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 01 21:03:50 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544249712' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 01 21:03:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 01 21:03:50 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448285339' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 01 21:03:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 01 21:03:50 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3507597656' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 01 21:03:50 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 01 21:03:51 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2600542521' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 01 21:03:51 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3218598630' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 01 21:03:51 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3544249712' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 01 21:03:51 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/448285339' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 01 21:03:51 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3507597656' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 01 21:03:51 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1069: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 01 21:03:51 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169047331' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 01 21:03:51 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 01 21:03:51 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3606478315' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 01 21:03:51 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15118 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:52 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2600542521' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 01 21:03:52 compute-0 ceph-mon[75880]: pgmap v1069: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:52 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1169047331' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 01 21:03:52 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3606478315' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 01 21:03:52 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15120 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:52 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15122 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000077
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000071
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b(unlocked)] enter Initial
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=0 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000088 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=0 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000034
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000057
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 21:03:52 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001387 2 0.000061
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:52 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.002947 2 0.000072
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001044 2 0.000044
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002071 2 0.000157
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 45 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:23.749764+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 131072 heap: 62824448 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 45 handle_osd_map epochs [45,46], i have 45, src has [1,46]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 45 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008347 2 0.000112
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008336 2 0.000095
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.009605 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.010775 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008421 2 0.000132
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.012041 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008760 2 0.000093
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.010317 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 12.332122 14 0.000087
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 12.339955 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 12.393456 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 12.393487 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667674065s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 109.582473755s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] exit Reset 0.000070 1 0.000109
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] exit Start 0.000010 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.667636871s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.582473755s@ mbc={}] enter Started/Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 12.334982 14 0.000097
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 12.340821 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 12.393542 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 12.393725 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664979935s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 active pruub 109.580123901s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] exit Reset 0.000064 1 0.000102
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] exit Start 0.000009 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=11.664947510s) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 109.580123901s@ mbc={}] enter Started/Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002648 4 0.000228
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000136 1 0.000077
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000010 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.004810 4 0.000192
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.004910 4 0.000267
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005039 5 0.000228
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.008375 2 0.000109
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.006281 2 0.000080
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 lc 33'1 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:24.749957+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.207840 1 0.000055
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.214120 2 0.000044
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.793719292s of 10.028434753s, submitted: 384
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.110695 1 0.000083
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.324787 1 0.000189
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.3( v 33'39 (0'0,33'39] local-lis/les=45/46 n=2 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000014 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 lc 33'20 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.081289 1 0.000105
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 46 pg[6.7( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/42 les/c/f=46/43/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1056768 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.004994 6 0.000132
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.004685 6 0.000096
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:25.750107+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.186125 3 0.000063
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.186190 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000123 1 0.000132
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.245147 3 0.000076
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.245205 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000088 1 0.000093
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 DELETING pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.086250 2 0.000375
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.086454 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.4( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=2 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.277412 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 DELETING pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.041949 2 0.000117
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.042086 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 47 pg[6.c( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=46) [1] r=-1 lpr=46 pi=[39,46)/1 pct=0'0 crt=33'39 lcod 0'0 active mbc={}] exit Started 1.292352 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1056768 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5(unlocked)] enter Initial
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=0 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000114 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=0 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000033
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000207 1 0.000049
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d(unlocked)] enter Initial
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=0 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000385 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=0 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000064
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000208 1 0.000113
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 21:03:52 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001304 2 0.000036
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:52 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000739 2 0.000124
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 48 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:26.750306+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 48 heartbeat osd_stat(store_statfs(0x4fe14a000/0x0/0x4ffc00000, data 0x33333/0x80000, compress 0x0/0x0/0x0, omap 0x96be, meta 0x1a26942), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 974848 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 356640 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.017369 2 0.000075
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.018427 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.018583 2 0.000097
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.020171 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 49 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002482 4 0.000214
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000069 1 0.000059
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000003 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 lc 33'10 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.006425 4 0.000228
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:27.750456+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067784 2 0.000028
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.063130 2 0.000060
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 lc 33'9 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126551 1 0.000053
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 49 pg[6.5( v 33'39 (0'0,33'39] local-lis/les=48/49 n=2 ec=39/23 lis/c=48/42 les/c/f=49/43/0 sis=48) [0] r=0 lpr=48 pi=[42,48)/1 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 942080 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 49 handle_osd_map epochs [49,50], i have 49, src has [1,50]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:28.750609+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:33:58.717228+0000 osd.0 (osd.0) 12 : cluster [DBG] 4.1e scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:33:58.727821+0000 osd.0 (osd.0) 13 : cluster [DBG] 4.1e scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 851968 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 13)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:58.717228+0000 osd.0 (osd.0) 12 : cluster [DBG] 4.1e scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:33:58.727821+0000 osd.0 (osd.0) 13 : cluster [DBG] 4.1e scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 50 heartbeat osd_stat(store_statfs(0x4fe13e000/0x0/0x4ffc00000, data 0x35e69/0x88000, compress 0x0/0x0/0x0, omap 0x9aed, meta 0x1a26513), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:29.750767+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 851968 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:30.750942+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 851968 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:31.751149+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 851968 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 375323 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 20.420049 33 0.000150
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 20.425826 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 20.479242 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 20.479273 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580068588s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 active pruub 117.580322266s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] exit Reset 0.000092 1 0.000170
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 52 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52 pruub=11.580017090s) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY pruub 117.580322266s@ mbc={}] enter Started/Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 52 handle_osd_map epochs [51,52], i have 52, src has [1,52]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:32.751296+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 786432 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:33.751427+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.192625 6 0.000097
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000168 1 0.000041
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 DELETING pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002214 2 0.000084
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002469 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 53 pg[6.8( v 33'39 (0'0,33'39] lb MIN local-lis/les=39/41 n=1 ec=39/23 lis/c=39/39 les/c/f=41/41/0 sis=52) [2] r=-1 lpr=52 pi=[39,52)/1 crt=33'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.195136 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 761856 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:34.751608+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 761856 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 53 heartbeat osd_stat(store_statfs(0x4fe137000/0x0/0x4ffc00000, data 0x39f15/0x91000, compress 0x0/0x0/0x0, omap 0x9d08, meta 0x1a262f8), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:35.751788+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 761856 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 53 heartbeat osd_stat(store_statfs(0x4fe137000/0x0/0x4ffc00000, data 0x39f15/0x91000, compress 0x0/0x0/0x0, omap 0x9d08, meta 0x1a262f8), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:36.752071+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 753664 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 378241 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:37.752257+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 753664 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:38.752394+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 745472 heap: 63873024 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 53 handle_osd_map epochs [54,54], i have 53, src has [1,54]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.348392487s of 14.436017990s, submitted: 40
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9(unlocked)] enter Initial
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=0 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000107 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=0 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000037
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000181 1 0.000045
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001868 2 0.000043
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 54 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:39.752573+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 54 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.477245 2 0.000070
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.479378 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=42/43 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002872 4 0.000204
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 55 pg[6.9( v 33'39 (0'0,33'39] local-lis/les=54/55 n=1 ec=39/23 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 712704 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:40.752771+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 55 heartbeat osd_stat(store_statfs(0x4fe131000/0x0/0x4ffc00000, data 0x3c9ab/0x97000, compress 0x0/0x0/0x0, omap 0x9e89, meta 0x1a26177), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a(unlocked)] enter Initial
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=0 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=0 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000032
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000193 1 0.000042
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 56 handle_osd_map epochs [56,56], i have 56, src has [1,56]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000502 2 0.000065
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 56 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 671744 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:41.752960+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.921770 2 0.000035
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.922518 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 lc 0'0 (0'0,33'39] local-lis/les=43/44 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/43 les/c/f=57/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001848 3 0.000125
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/43 les/c/f=57/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/43 les/c/f=57/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 57 pg[6.a( v 33'39 (0'0,33'39] local-lis/les=56/57 n=1 ec=39/23 lis/c=56/43 les/c/f=57/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=33'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 630784 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 391685 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:42.753081+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 57 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 622592 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:43.753236+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 606208 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 57 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x3f441/0x9d000, compress 0x0/0x0/0x0, omap 0x9fbd, meta 0x1a26043), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 57 handle_osd_map epochs [58,58], i have 58, src has [1,58]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 19.701046 36 0.000230
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 19.712445 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 20.722075 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 20.722117 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290416718s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 active pruub 129.917236328s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] exit Reset 0.000130 1 0.000203
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] exit Start 0.000016 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 58 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58 pruub=12.290340424s) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY pruub 129.917236328s@ mbc={}] enter Started/Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:44.753401+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:13.843435+0000 osd.0 (osd.0) 14 : cluster [DBG] 4.b scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:13.854006+0000 osd.0 (osd.0) 15 : cluster [DBG] 4.b scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 15)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:13.843435+0000 osd.0 (osd.0) 14 : cluster [DBG] 4.b scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:13.854006+0000 osd.0 (osd.0) 15 : cluster [DBG] 4.b scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.531475 7 0.000444
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 59 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.012176 2 0.000063
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.012218 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000106 1 0.000080
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 DELETING pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.009484 2 0.000205
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.009651 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 59 pg[6.b( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=58) [1] r=-1 lpr=58 pi=[45,58)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 0.553422 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 581632 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:45.753648+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 581632 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:46.753826+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 540672 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 399587 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:47.754126+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:16.880504+0000 osd.0 (osd.0) 16 : cluster [DBG] 4.6 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:16.891058+0000 osd.0 (osd.0) 17 : cluster [DBG] 4.6 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 20.068285 33 0.000372
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 20.138829 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 21.157277 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 21.157351 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=48) [0] r=0 lpr=48 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863982201s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 active pruub 132.959640503s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] exit Reset 0.000115 1 0.000175
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] exit Start 0.000008 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 60 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=11.863910675s) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY pruub 132.959640503s@ mbc={}] enter Started/Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 60 heartbeat osd_stat(store_statfs(0x4fe125000/0x0/0x4ffc00000, data 0x42075/0xa3000, compress 0x0/0x0/0x0, omap 0x89c5, meta 0x1a2763b), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 17)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:16.880504+0000 osd.0 (osd.0) 16 : cluster [DBG] 4.6 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:16.891058+0000 osd.0 (osd.0) 17 : cluster [DBG] 4.6 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 557056 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.640263 6 0.000106
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.118565 3 0.000087
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.118639 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000085 1 0.000100
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 DELETING pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.017222 2 0.000296
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.017377 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 61 pg[6.d( v 33'39 (0'0,33'39] lb MIN local-lis/les=48/49 n=1 ec=39/23 lis/c=48/48 les/c/f=49/49/0 sis=60) [1] r=-1 lpr=60 pi=[48,60)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 0.776345 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:48.754464+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 499712 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:49.754695+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:18.914339+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.19 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:18.924859+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.19 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.497750282s of 10.585931778s, submitted: 36
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 19)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:18.914339+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.19 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:18.924859+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.19 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 442368 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:50.754967+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:19.918476+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:19.929167+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 21)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:19.918476+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:19.929167+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 417792 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:51.755241+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 409600 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 410126 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 62 heartbeat osd_stat(store_statfs(0x4fe11b000/0x0/0x4ffc00000, data 0x460db/0xab000, compress 0x0/0x0/0x0, omap 0x8b7c, meta 0x1a27484), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 62 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active+clean] exit Started/Primary/Active/Clean 27.693333 52 0.000336
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary/Active 27.912475 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started/Primary 28.922831 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] exit Started 28.922888 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=45) [0] r=0 lpr=45 crt=33'39 mlcod 33'39 active mbc={255={}}] enter Reset
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092677116s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 active pruub 137.919769287s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] exit Reset 0.000200 1 0.000321
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] enter Started
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] enter Start
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] state<Start>: transitioning to Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] exit Start 0.000055 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 63 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63 pruub=12.092543602s) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY pruub 137.919769287s@ mbc={}] enter Started/Stray
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:52.755380+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:21.934406+0000 osd.0 (osd.0) 22 : cluster [DBG] 4.0 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:21.944819+0000 osd.0 (osd.0) 23 : cluster [DBG] 4.0 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 23)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:21.934406+0000 osd.0 (osd.0) 22 : cluster [DBG] 4.0 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:21.944819+0000 osd.0 (osd.0) 23 : cluster [DBG] 4.0 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY mbc={}] exit Started/Stray 0.415573 6 0.000210
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 crt=33'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.138296 3 0.000075
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ReplicaActive 0.138348 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000236 1 0.000102
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] enter Started/ToDelete/Deleting
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 DELETING pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete/Deleting 0.025018 2 0.000230
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started/ToDelete 0.025358 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 pg_epoch: 64 pg[6.f( v 33'39 (0'0,33'39] lb MIN local-lis/les=45/46 n=1 ec=39/23 lis/c=45/45 les/c/f=46/46/0 sis=63) [2] r=-1 lpr=63 pi=[45,63)/1 pct=0'0 crt=33'39 active mbc={}] exit Started 0.579425 0 0.000000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 360448 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:53.755648+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:23.030935+0000 osd.0 (osd.0) 24 : cluster [DBG] 4.c scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:23.100861+0000 osd.0 (osd.0) 25 : cluster [DBG] 4.c scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 25)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:23.030935+0000 osd.0 (osd.0) 24 : cluster [DBG] 4.c scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:23.100861+0000 osd.0 (osd.0) 25 : cluster [DBG] 4.c scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 319488 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:54.755875+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 303104 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:55.756262+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:25.053318+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.15 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:25.063861+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.15 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 27)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:25.053318+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.15 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:25.063861+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.15 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 303104 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:56.756546+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416133 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:57.756710+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:26.992688+0000 osd.0 (osd.0) 28 : cluster [DBG] 4.16 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:27.003225+0000 osd.0 (osd.0) 29 : cluster [DBG] 4.16 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 29)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:26.992688+0000 osd.0 (osd.0) 28 : cluster [DBG] 4.16 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:27.003225+0000 osd.0 (osd.0) 29 : cluster [DBG] 4.16 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:58.756944+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:28.001516+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.17 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:28.012082+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.17 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 31)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:28.001516+0000 osd.0 (osd.0) 30 : cluster [DBG] 4.17 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:28.012082+0000 osd.0 (osd.0) 31 : cluster [DBG] 4.17 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 286720 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:33:59.757337+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 278528 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:00.757658+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 278528 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:01.757881+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 262144 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418546 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:02.758093+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 262144 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:03.758358+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:04.758521+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:05.758708+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 253952 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:06.758930+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 245760 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418546 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:07.759115+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.146697998s of 18.192071915s, submitted: 21
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 237568 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:08.759326+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:38.110524+0000 osd.0 (osd.0) 32 : cluster [DBG] 7.1b scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:38.121091+0000 osd.0 (osd.0) 33 : cluster [DBG] 7.1b scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 33)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:38.110524+0000 osd.0 (osd.0) 32 : cluster [DBG] 7.1b scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:38.121091+0000 osd.0 (osd.0) 33 : cluster [DBG] 7.1b scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:09.759617+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:10.759850+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 221184 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:11.760012+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 212992 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423372 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:12.760293+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:42.043953+0000 osd.0 (osd.0) 34 : cluster [DBG] 5.14 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:42.054491+0000 osd.0 (osd.0) 35 : cluster [DBG] 5.14 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 35)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:42.043953+0000 osd.0 (osd.0) 34 : cluster [DBG] 5.14 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:42.054491+0000 osd.0 (osd.0) 35 : cluster [DBG] 5.14 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 212992 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:13.760555+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 204800 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:14.760693+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:44.109010+0000 osd.0 (osd.0) 36 : cluster [DBG] 5.15 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:44.119596+0000 osd.0 (osd.0) 37 : cluster [DBG] 5.15 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 37)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:44.109010+0000 osd.0 (osd.0) 36 : cluster [DBG] 5.15 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:44.119596+0000 osd.0 (osd.0) 37 : cluster [DBG] 5.15 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 196608 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:15.760888+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 196608 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:16.761029+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 147456 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 428198 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:17.761210+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:47.084756+0000 osd.0 (osd.0) 38 : cluster [DBG] 2.13 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:47.095286+0000 osd.0 (osd.0) 39 : cluster [DBG] 2.13 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 39)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:47.084756+0000 osd.0 (osd.0) 38 : cluster [DBG] 2.13 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:47.095286+0000 osd.0 (osd.0) 39 : cluster [DBG] 2.13 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64782336 unmapped: 139264 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:18.761478+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:48.073857+0000 osd.0 (osd.0) 40 : cluster [DBG] 2.11 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:48.084427+0000 osd.0 (osd.0) 41 : cluster [DBG] 2.11 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 41)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:48.073857+0000 osd.0 (osd.0) 40 : cluster [DBG] 2.11 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:48.084427+0000 osd.0 (osd.0) 41 : cluster [DBG] 2.11 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 131072 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:19.761791+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.945369720s of 11.965550423s, submitted: 10
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 131072 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:20.761998+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:50.076204+0000 osd.0 (osd.0) 42 : cluster [DBG] 3.15 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:50.086698+0000 osd.0 (osd.0) 43 : cluster [DBG] 3.15 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 43)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:50.076204+0000 osd.0 (osd.0) 42 : cluster [DBG] 3.15 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:50.086698+0000 osd.0 (osd.0) 43 : cluster [DBG] 3.15 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64790528 unmapped: 131072 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:21.762204+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 433024 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:22.762344+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:23.762447+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64798720 unmapped: 122880 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:24.762590+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:54.088634+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.16 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:54.098996+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.16 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 114688 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 45)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:54.088634+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.16 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:54.098996+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.16 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:25.762783+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 114688 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:26.762924+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 106496 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 435437 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:27.763109+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 106496 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:28.763299+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 98304 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:29.763492+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:59.074945+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.12 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:34:59.085482+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.12 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 47)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:59.074945+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.12 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:34:59.085482+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.12 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:30.763729+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 90112 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:31.763902+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 437850 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:32.764090+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:33.764221+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 81920 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:34.764404+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.023485184s of 15.041648865s, submitted: 6
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:35.764589+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:05.117904+0000 osd.0 (osd.0) 48 : cluster [DBG] 3.17 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:05.128478+0000 osd.0 (osd.0) 49 : cluster [DBG] 3.17 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 49)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:05.117904+0000 osd.0 (osd.0) 48 : cluster [DBG] 3.17 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:05.128478+0000 osd.0 (osd.0) 49 : cluster [DBG] 3.17 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:36.764851+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 73728 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442676 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:37.764995+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:07.076115+0000 osd.0 (osd.0) 50 : cluster [DBG] 7.13 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:07.086687+0000 osd.0 (osd.0) 51 : cluster [DBG] 7.13 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 49152 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 51)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:07.076115+0000 osd.0 (osd.0) 50 : cluster [DBG] 7.13 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:07.086687+0000 osd.0 (osd.0) 51 : cluster [DBG] 7.13 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:38.765161+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:08.087875+0000 osd.0 (osd.0) 52 : cluster [DBG] 7.f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:08.098574+0000 osd.0 (osd.0) 53 : cluster [DBG] 7.f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 40960 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:39.765408+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 4 last_log 55 sent 53 num 4 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:09.051158+0000 osd.0 (osd.0) 54 : cluster [DBG] 7.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:09.061745+0000 osd.0 (osd.0) 55 : cluster [DBG] 7.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 53)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:08.087875+0000 osd.0 (osd.0) 52 : cluster [DBG] 7.f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:08.098574+0000 osd.0 (osd.0) 53 : cluster [DBG] 7.f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 24576 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:40.765756+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 4 last_log 57 sent 55 num 4 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:10.084395+0000 osd.0 (osd.0) 56 : cluster [DBG] 5.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:10.094970+0000 osd.0 (osd.0) 57 : cluster [DBG] 5.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 55)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:09.051158+0000 osd.0 (osd.0) 54 : cluster [DBG] 7.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:09.061745+0000 osd.0 (osd.0) 55 : cluster [DBG] 7.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 57)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:10.084395+0000 osd.0 (osd.0) 56 : cluster [DBG] 5.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:10.094970+0000 osd.0 (osd.0) 57 : cluster [DBG] 5.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 16384 heap: 64921600 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:41.765916+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 1048576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452320 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:42.766049+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:12.115437+0000 osd.0 (osd.0) 58 : cluster [DBG] 3.6 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:12.126011+0000 osd.0 (osd.0) 59 : cluster [DBG] 3.6 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 59)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:12.115437+0000 osd.0 (osd.0) 58 : cluster [DBG] 3.6 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:12.126011+0000 osd.0 (osd.0) 59 : cluster [DBG] 3.6 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 1048576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:43.766227+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 1048576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:44.766352+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 1040384 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:45.766487+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 1040384 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:46.766633+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.991403580s of 12.012083054s, submitted: 12
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 1007616 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454731 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:47.766820+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:17.129949+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.2 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:17.140513+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.2 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 61)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:17.129949+0000 osd.0 (osd.0) 60 : cluster [DBG] 5.2 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:17.140513+0000 osd.0 (osd.0) 61 : cluster [DBG] 5.2 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 1007616 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:48.767036+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 1007616 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:49.767356+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 999424 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:50.767601+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 999424 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:51.767763+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:21.106688+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.5 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:21.117229+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.5 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 63)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:21.106688+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.5 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:21.117229+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.5 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 983040 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 457142 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:52.768062+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 974848 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:53.768250+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 958464 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:54.768398+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 950272 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:55.768555+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:25.091932+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:25.102551+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 65)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:25.091932+0000 osd.0 (osd.0) 64 : cluster [DBG] 2.f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:25.102551+0000 osd.0 (osd.0) 65 : cluster [DBG] 2.f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 950272 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:56.768730+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:26.096831+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.6 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:26.107407+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.6 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 67)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:26.096831+0000 osd.0 (osd.0) 66 : cluster [DBG] 7.6 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:26.107407+0000 osd.0 (osd.0) 67 : cluster [DBG] 7.6 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464375 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:57.769117+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:27.053389+0000 osd.0 (osd.0) 68 : cluster [DBG] 3.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:27.063973+0000 osd.0 (osd.0) 69 : cluster [DBG] 3.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 69)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:27.053389+0000 osd.0 (osd.0) 68 : cluster [DBG] 3.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:27.063973+0000 osd.0 (osd.0) 69 : cluster [DBG] 3.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 942080 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:58.769362+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 933888 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:34:59.769542+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 933888 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:00.769787+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 933888 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:01.769927+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 925696 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464375 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:02.770118+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 925696 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:03.770300+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.921869278s of 16.941867828s, submitted: 10
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 925696 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:04.770442+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:34.071853+0000 osd.0 (osd.0) 70 : cluster [DBG] 5.7 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:34.082380+0000 osd.0 (osd.0) 71 : cluster [DBG] 5.7 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 917504 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 71)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:34.071853+0000 osd.0 (osd.0) 70 : cluster [DBG] 5.7 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:34.082380+0000 osd.0 (osd.0) 71 : cluster [DBG] 5.7 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:05.770593+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 909312 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:06.770741+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:36.025690+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.2 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:36.036239+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.2 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 892928 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471608 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:07.770914+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 4 last_log 75 sent 73 num 4 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:37.017722+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.9 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:37.028305+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.9 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 73)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:36.025690+0000 osd.0 (osd.0) 72 : cluster [DBG] 2.2 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:36.036239+0000 osd.0 (osd.0) 73 : cluster [DBG] 2.2 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 884736 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:08.771072+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 4 last_log 77 sent 75 num 4 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:38.055242+0000 osd.0 (osd.0) 76 : cluster [DBG] 3.c scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:38.065818+0000 osd.0 (osd.0) 77 : cluster [DBG] 3.c scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 75)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:37.017722+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.9 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:37.028305+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.9 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 77)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:38.055242+0000 osd.0 (osd.0) 76 : cluster [DBG] 3.c scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:38.065818+0000 osd.0 (osd.0) 77 : cluster [DBG] 3.c scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 868352 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:09.772979+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 868352 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:10.773214+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 868352 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:11.773393+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:41.078483+0000 osd.0 (osd.0) 78 : cluster [DBG] 5.4 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:41.089065+0000 osd.0 (osd.0) 79 : cluster [DBG] 5.4 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 79)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:41.078483+0000 osd.0 (osd.0) 78 : cluster [DBG] 5.4 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:41.089065+0000 osd.0 (osd.0) 79 : cluster [DBG] 5.4 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 860160 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476430 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:12.773641+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 851968 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:13.773789+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 843776 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:14.773905+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 843776 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:15.774020+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 843776 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:16.774152+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476430 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:17.774237+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:18.774375+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 835584 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:19.774517+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 827392 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:20.774712+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 819200 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:21.774958+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.228290558s of 17.986967087s, submitted: 10
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 802816 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 478841 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:22.775098+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:52.058771+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.1 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:52.069262+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.1 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 802816 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 81)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:52.058771+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.1 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:52.069262+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.1 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:23.775307+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:53.102610+0000 osd.0 (osd.0) 82 : cluster [DBG] 7.18 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:53.113107+0000 osd.0 (osd.0) 83 : cluster [DBG] 7.18 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 778240 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 83)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:53.102610+0000 osd.0 (osd.0) 82 : cluster [DBG] 7.18 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:53.113107+0000 osd.0 (osd.0) 83 : cluster [DBG] 7.18 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:24.775588+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:54.057259+0000 osd.0 (osd.0) 84 : cluster [DBG] 3.f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:54.067833+0000 osd.0 (osd.0) 85 : cluster [DBG] 3.f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 778240 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 85)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:54.057259+0000 osd.0 (osd.0) 84 : cluster [DBG] 3.f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:54.067833+0000 osd.0 (osd.0) 85 : cluster [DBG] 3.f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:25.775851+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:55.033356+0000 osd.0 (osd.0) 86 : cluster [DBG] 3.1b scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:55.043949+0000 osd.0 (osd.0) 87 : cluster [DBG] 3.1b scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 778240 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 87)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:55.033356+0000 osd.0 (osd.0) 86 : cluster [DBG] 3.1b scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:55.043949+0000 osd.0 (osd.0) 87 : cluster [DBG] 3.1b scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:26.776123+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 770048 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486078 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:27.776303+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 745472 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:28.776463+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:58.034854+0000 osd.0 (osd.0) 88 : cluster [DBG] 7.4 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:35:58.045567+0000 osd.0 (osd.0) 89 : cluster [DBG] 7.4 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 737280 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 89)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:58.034854+0000 osd.0 (osd.0) 88 : cluster [DBG] 7.4 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:35:58.045567+0000 osd.0 (osd.0) 89 : cluster [DBG] 7.4 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:29.776713+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 737280 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:30.776884+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 729088 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:31.777097+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.010351181s of 10.056418419s, submitted: 10
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 704512 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 490902 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:32.777289+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:02.115277+0000 osd.0 (osd.0) 90 : cluster [DBG] 7.1f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:02.125843+0000 osd.0 (osd.0) 91 : cluster [DBG] 7.1f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 91)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:02.115277+0000 osd.0 (osd.0) 90 : cluster [DBG] 7.1f scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:02.125843+0000 osd.0 (osd.0) 91 : cluster [DBG] 7.1f scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 704512 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:33.777582+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:03.070397+0000 osd.0 (osd.0) 92 : cluster [DBG] 2.18 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:03.080956+0000 osd.0 (osd.0) 93 : cluster [DBG] 2.18 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 93)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:03.070397+0000 osd.0 (osd.0) 92 : cluster [DBG] 2.18 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:03.080956+0000 osd.0 (osd.0) 93 : cluster [DBG] 2.18 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 688128 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:34.777886+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:04.117364+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.19 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:04.127391+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.19 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 95)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:04.117364+0000 osd.0 (osd.0) 94 : cluster [DBG] 2.19 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:04.127391+0000 osd.0 (osd.0) 95 : cluster [DBG] 2.19 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 688128 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:35.778097+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:05.129451+0000 osd.0 (osd.0) 96 : cluster [DBG] 5.1e scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:05.139989+0000 osd.0 (osd.0) 97 : cluster [DBG] 5.1e scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 97)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:05.129451+0000 osd.0 (osd.0) 96 : cluster [DBG] 5.1e scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:05.139989+0000 osd.0 (osd.0) 97 : cluster [DBG] 5.1e scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 688128 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:36.778312+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 679936 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 498141 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:37.778489+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 679936 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:38.778723+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 671744 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:39.778886+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 671744 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:40.779126+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 671744 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:41.779311+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.016089439s of 10.036548615s, submitted: 8
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 655360 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 500552 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:42.779475+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:12.151863+0000 osd.0 (osd.0) 98 : cluster [DBG] 6.0 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:12.176572+0000 osd.0 (osd.0) 99 : cluster [DBG] 6.0 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 99)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:12.151863+0000 osd.0 (osd.0) 98 : cluster [DBG] 6.0 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:12.176572+0000 osd.0 (osd.0) 99 : cluster [DBG] 6.0 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 655360 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:43.779662+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 630784 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:44.779829+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:14.195575+0000 osd.0 (osd.0) 100 : cluster [DBG] 6.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:14.213233+0000 osd.0 (osd.0) 101 : cluster [DBG] 6.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 101)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:14.195575+0000 osd.0 (osd.0) 100 : cluster [DBG] 6.3 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:14.213233+0000 osd.0 (osd.0) 101 : cluster [DBG] 6.3 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 630784 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:45.780416+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 622592 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:46.780555+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 622592 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 502963 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:47.780744+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 614400 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:48.780906+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 606208 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:49.781108+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 606208 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:50.781332+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 606208 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:51.781464+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 598016 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 502963 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:52.781621+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 581632 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:53.781785+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.150177956s of 12.157385826s, submitted: 4
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 557056 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:54.781953+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:24.309241+0000 osd.0 (osd.0) 102 : cluster [DBG] 6.7 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:24.323531+0000 osd.0 (osd.0) 103 : cluster [DBG] 6.7 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 103)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:24.309241+0000 osd.0 (osd.0) 102 : cluster [DBG] 6.7 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:24.323531+0000 osd.0 (osd.0) 103 : cluster [DBG] 6.7 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65421312 unmapped: 548864 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:55.782162+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:25.300302+0000 osd.0 (osd.0) 104 : cluster [DBG] 6.9 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:25.310958+0000 osd.0 (osd.0) 105 : cluster [DBG] 6.9 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 105)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:25.300302+0000 osd.0 (osd.0) 104 : cluster [DBG] 6.9 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:25.310958+0000 osd.0 (osd.0) 105 : cluster [DBG] 6.9 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 540672 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:56.782491+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:26.274103+0000 osd.0 (osd.0) 106 : cluster [DBG] 6.a scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:26.284647+0000 osd.0 (osd.0) 107 : cluster [DBG] 6.a scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 107)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:26.274103+0000 osd.0 (osd.0) 106 : cluster [DBG] 6.a scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:26.284647+0000 osd.0 (osd.0) 107 : cluster [DBG] 6.a scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65429504 unmapped: 540672 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:57.782774+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:27.285783+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.5 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  will send 2025-12-01T20:36:27.303483+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.5 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client handle_log_ack log(last 109)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:27.285783+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.5 scrub starts
Dec 01 21:03:52 compute-0 ceph-osd[86634]: log_client  logged 2025-12-01T20:36:27.303483+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.5 scrub ok
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65437696 unmapped: 532480 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:58.783025+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 524288 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:35:59.783205+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 524288 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:00.783422+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 516096 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:01.783624+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 507904 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:02.783833+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 499712 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:03.783983+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 491520 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:04.784353+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 491520 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:05.784628+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 483328 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:06.786018+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 483328 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:07.786266+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 475136 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:08.786423+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 475136 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:09.786572+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65495040 unmapped: 475136 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:10.786797+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 466944 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:11.786998+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 466944 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:12.787264+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 466944 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:13.787473+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 458752 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:14.787619+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65511424 unmapped: 458752 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:15.787819+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 450560 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:16.787971+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 450560 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:17.788161+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 442368 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:18.788365+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 434176 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:19.788559+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65536000 unmapped: 434176 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:20.788999+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 425984 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:21.789137+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65544192 unmapped: 425984 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:22.789280+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 417792 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:23.789452+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 401408 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:24.789692+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 401408 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:25.789856+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 393216 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:26.790025+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 393216 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:27.790243+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 393216 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:28.790412+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 376832 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:29.790604+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 376832 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:30.790846+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 368640 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:31.790986+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 368640 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:32.791119+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65601536 unmapped: 368640 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:33.791319+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 352256 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:34.791471+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 352256 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:35.791675+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 344064 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:36.791878+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 344064 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:37.792058+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 352256 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:38.792423+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 344064 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:39.792571+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65626112 unmapped: 344064 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:40.792778+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 335872 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:41.792926+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 335872 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:42.793073+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 327680 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:43.793237+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 327680 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:44.793428+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 327680 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:45.793552+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 319488 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:46.793732+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 319488 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:47.793852+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 311296 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:48.793954+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 311296 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:49.794100+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 311296 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:50.794289+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 303104 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:51.794452+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 303104 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:52.794566+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 294912 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:53.794704+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 294912 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:54.794870+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:55.795007+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 286720 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:56.795167+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 278528 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:57.795303+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 278528 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:58.795435+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65708032 unmapped: 262144 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:36:59.795558+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 253952 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:00.795723+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 253952 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:01.795835+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 245760 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:02.795958+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 245760 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:03.796053+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 237568 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:04.796216+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 229376 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:05.796352+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 229376 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:06.796491+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 229376 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:07.796619+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 221184 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:08.796740+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 221184 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:09.796861+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 212992 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:10.797030+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 212992 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:11.797159+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 204800 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:12.797324+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 204800 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:13.797458+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 204800 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:14.797632+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 196608 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:15.797888+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 188416 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:16.798049+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 180224 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:17.798262+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 180224 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:18.798422+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 155648 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:19.798635+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 147456 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:20.799407+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 147456 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:21.801529+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 139264 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:22.803175+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 139264 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:23.803927+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 139264 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:24.805125+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 131072 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:25.805305+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 131072 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:26.805450+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 122880 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:27.805866+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 122880 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:28.806437+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 122880 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:29.806599+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 114688 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:30.807448+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 114688 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:31.807607+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 106496 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:32.808409+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 106496 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:33.808853+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 106496 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:34.808989+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 98304 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:35.809364+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 98304 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:36.809612+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 90112 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:37.810231+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 90112 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:38.810408+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 65536 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:39.810626+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 65536 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:40.810834+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 65536 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:41.811008+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 57344 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:42.811212+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 57344 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:43.811465+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 57344 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:44.811667+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 49152 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:45.811804+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 49152 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:46.811956+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 40960 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:47.812091+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 40960 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:48.812362+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 32768 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:49.812578+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 32768 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:50.812740+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 32768 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:51.812945+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 24576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:52.813126+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 24576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:53.813273+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 24576 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:54.813430+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 16384 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:55.813605+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 16384 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:56.813755+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 8192 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:57.813906+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 8192 heap: 65970176 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:58.814036+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1032192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:37:59.814224+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1032192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:00.814417+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1032192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:01.814619+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1024000 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:02.814737+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1024000 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:03.814890+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1015808 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:04.815079+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1007616 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:05.815252+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1007616 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:06.815427+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 999424 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:07.815578+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 999424 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:08.815739+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 999424 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:09.815902+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 991232 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:10.816104+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 991232 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:11.816257+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 991232 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:12.816406+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 983040 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:13.816584+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 983040 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:14.816720+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 974848 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:15.816830+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 974848 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:16.816990+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 966656 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:17.817137+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 966656 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:18.817274+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 966656 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:19.817428+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 958464 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:20.817645+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 958464 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:21.817794+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 950272 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:22.817929+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 950272 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:23.818101+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 958464 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:24.818253+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 950272 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:25.818390+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 950272 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:26.819069+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 950272 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:27.819223+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 942080 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:28.819352+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 942080 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:29.819481+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 942080 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:30.819658+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 933888 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:31.819841+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 933888 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:32.820119+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 925696 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:33.820253+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 925696 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:34.820378+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 917504 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:35.820572+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 917504 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:36.820764+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 917504 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:37.820940+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 909312 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:38.821125+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 909312 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:39.821243+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 901120 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:40.821402+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 901120 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:41.821527+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 901120 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:42.821670+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 892928 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:43.821825+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 892928 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:44.822007+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 892928 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:45.822128+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 884736 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:46.822265+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 884736 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:47.822403+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 876544 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:48.822744+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 876544 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:49.822875+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 876544 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:50.823053+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 868352 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:51.823231+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 868352 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:52.823375+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 868352 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:53.823525+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 860160 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:54.823657+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 860160 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:55.823812+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 851968 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:56.824017+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 851968 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:57.824171+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 843776 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:58.824415+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 843776 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:38:59.824582+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 843776 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:00.824815+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 843776 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:01.824954+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 835584 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:02.825137+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 835584 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:03.825287+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 827392 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:04.825471+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 827392 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:05.825625+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 819200 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:06.826053+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 819200 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:07.826360+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 819200 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:08.826553+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 811008 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:09.826748+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 811008 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:10.826942+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 802816 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:11.827129+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 802816 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:12.827266+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 802816 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:13.827393+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 794624 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:14.827594+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 794624 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:15.827788+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 794624 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:16.827908+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 786432 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:17.828077+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 786432 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:18.828244+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 770048 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:19.828430+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 770048 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:20.828605+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 761856 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:21.828734+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 761856 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:22.828887+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 761856 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:23.829053+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 753664 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:24.829246+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 753664 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:25.829367+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 753664 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:26.829527+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 745472 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:27.829675+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 745472 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:28.829850+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 745472 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:29.830009+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 737280 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:30.830177+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 737280 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:31.830787+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 729088 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:32.831069+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 729088 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:33.831268+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 729088 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:34.831395+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 720896 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:35.831585+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 720896 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:36.831913+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 712704 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:37.832079+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 712704 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:38.832636+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 704512 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:39.832750+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 704512 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:40.833028+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 704512 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:41.833262+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 696320 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:42.833439+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 696320 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:43.833681+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 696320 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:44.833957+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 688128 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:45.834124+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 688128 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:46.834353+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 679936 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:47.834517+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 679936 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:48.834659+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 679936 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:49.834802+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 671744 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:50.834988+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 671744 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:51.835256+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 671744 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:52.835385+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 663552 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:53.835588+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 663552 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:54.835721+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 655360 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:55.835850+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 655360 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:56.836094+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 647168 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:57.836249+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 647168 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:58.836388+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 647168 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:39:59.836557+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 638976 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:00.836747+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 638976 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:01.836915+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 630784 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:02.837056+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 630784 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:03.837168+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 630784 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:04.837298+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 622592 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:05.837417+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 622592 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:06.837615+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 622592 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:07.837826+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 614400 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:08.837969+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 614400 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:09.838085+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 614400 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:10.838517+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 606208 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:11.838614+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 606208 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:12.838769+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 598016 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:13.838935+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 598016 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:14.839080+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 598016 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:15.839242+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 589824 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:16.839394+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 589824 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:17.839533+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 589824 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:18.839671+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 581632 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:19.839826+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 581632 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:20.840001+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 573440 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:21.840330+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 573440 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:22.840452+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 573440 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:23.840759+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 565248 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:24.840966+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 565248 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:25.841137+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 557056 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:26.841273+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 557056 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:27.841480+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 557056 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:28.841662+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 548864 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:29.841831+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 548864 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:30.842001+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 548864 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:31.842123+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:32.842270+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 540672 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:33.842408+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:34.842720+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:35.843040+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 532480 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:36.843728+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 524288 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:37.844111+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 524288 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:38.844279+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 516096 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:39.844508+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 516096 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:40.844694+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 516096 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:41.844899+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 507904 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:42.845082+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 507904 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:43.845440+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 499712 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:44.845743+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 499712 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:45.846103+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 499712 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:46.846400+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:47.846671+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:48.846902+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:49.847138+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 483328 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:50.847393+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 483328 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:51.847575+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 475136 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:52.847788+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 475136 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:53.847922+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 475136 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:54.848065+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 466944 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:55.848233+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 466944 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:56.848373+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 458752 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:57.848605+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 499712 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:58.848742+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 499712 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:40:59.848865+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:00.849045+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 491520 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:01.849280+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 483328 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:02.849397+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 475136 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:03.849618+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 475136 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:04.849746+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 466944 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:05.849868+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 466944 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:06.850394+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 466944 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:07.850578+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 458752 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:08.850715+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 458752 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:09.850853+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 450560 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:10.851013+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 450560 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:11.851199+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 442368 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:12.851349+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 442368 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:13.851475+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 442368 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:14.851609+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 434176 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:15.851779+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 434176 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:16.852366+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 434176 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:17.852490+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 425984 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:18.852628+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 425984 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:19.852740+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 417792 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:20.852938+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 417792 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:21.853092+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 417792 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:22.853266+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 409600 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:23.853399+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 409600 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:24.853575+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 409600 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:25.853719+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 401408 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:26.853950+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 401408 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:27.854107+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 393216 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:28.854261+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 393216 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:29.854443+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 385024 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:30.854699+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 385024 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:31.854899+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 385024 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:32.855035+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 376832 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:33.855261+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 376832 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:34.855389+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 376832 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:35.855514+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 368640 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:36.855747+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 368640 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:37.856027+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 360448 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:38.856228+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 360448 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:39.856370+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 360448 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:40.856643+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 352256 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:41.856792+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 352256 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:42.856940+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 352256 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:43.858284+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 344064 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:44.858483+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 344064 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:45.859721+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4357 writes, 20K keys, 4357 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4357 writes, 454 syncs, 9.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4357 writes, 20K keys, 4357 commit groups, 1.0 writes per commit group, ingest: 16.42 MB, 0.03 MB/s
                                           Interval WAL: 4357 writes, 454 syncs, 9.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 270336 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:46.859887+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 270336 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:47.860261+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:48.860408+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 270336 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:49.860516+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 262144 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:50.860754+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 262144 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:51.860913+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 262144 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:52.861173+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 253952 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:53.861417+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 253952 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:54.861756+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 245760 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:55.861930+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 245760 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:56.862063+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 237568 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:57.862233+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 229376 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:58.862505+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 229376 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:41:59.862667+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 221184 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:00.862970+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 221184 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:01.863234+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 221184 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:02.863498+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 212992 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:03.863719+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 212992 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:04.863970+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 212992 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:05.864092+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 204800 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:06.864350+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 204800 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:07.864557+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 204800 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:08.864744+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 196608 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:09.864903+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 196608 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:10.865129+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 188416 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:11.865267+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 188416 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:12.865439+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 188416 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:13.865612+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 180224 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:14.865833+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 180224 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:15.866039+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 172032 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:16.866191+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 172032 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:17.866345+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 163840 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:18.866528+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 163840 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:19.866704+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 163840 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:20.866920+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 155648 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:21.867043+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 155648 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:22.867248+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 155648 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:23.867417+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 147456 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:24.867550+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 147456 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:25.867766+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 139264 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:26.867930+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 139264 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:27.868081+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 139264 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:28.868423+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 131072 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:29.868587+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 131072 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:30.868951+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 122880 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:31.869129+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 122880 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:32.869379+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 122880 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:33.869578+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 114688 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:34.869743+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 114688 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:35.869916+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 114688 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:36.870078+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 106496 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:37.870232+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 106496 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:38.870556+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 106496 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:39.870731+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 98304 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:40.870986+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 98304 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:41.871119+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 90112 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:42.871426+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 90112 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:43.871633+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 90112 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:44.871768+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 81920 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:45.871914+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 81920 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:46.872045+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 73728 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:47.872172+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 73728 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:48.872309+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 73728 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:49.872430+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 65536 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:50.872583+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 65536 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:51.872724+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 57344 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:52.872879+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 57344 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:53.873001+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 57344 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:54.873134+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 49152 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:55.873235+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 49152 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:56.873360+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 49152 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:57.873497+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 40960 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:58.873624+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 40960 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:42:59.873766+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 40960 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:00.873921+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 32768 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:01.874061+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 32768 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:02.874234+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 24576 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:03.874369+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 24576 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:04.874492+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 16384 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:05.874628+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 16384 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:06.874756+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 16384 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:07.874883+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 8192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:08.875008+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 8192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:09.875137+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 8192 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:10.875328+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 0 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:11.875453+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 0 heap: 67018752 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:12.875573+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1040384 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:13.875719+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1040384 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:14.875875+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1040384 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:15.875954+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1032192 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:16.876086+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1032192 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:17.876206+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1024000 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:18.876319+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1024000 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:19.876431+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1015808 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:20.878343+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1015808 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:21.878452+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1015808 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:22.878616+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1007616 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:23.878772+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1007616 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:24.878907+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 999424 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:25.881451+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 999424 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:26.881569+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 999424 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:27.881693+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 991232 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:28.881838+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 991232 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:29.881972+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 991232 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:30.882130+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 983040 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:31.882248+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 983040 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:32.882389+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 974848 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:33.882528+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 974848 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:34.882662+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 974848 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:35.882798+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 966656 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:36.882939+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 966656 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:37.883060+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 966656 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:38.883192+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 958464 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:39.883314+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 958464 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:40.883468+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 950272 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:41.883575+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 950272 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:42.883709+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 942080 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:43.883904+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 942080 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:44.884099+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 942080 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:45.884278+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:46.885238+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:47.885366+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:48.885771+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:49.885980+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:50.886153+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:51.886270+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:52.886406+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:53.886571+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:54.886781+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:55.886918+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:56.887055+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:57.887219+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:58.887381+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:43:59.887544+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:00.888088+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:01.888262+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:02.888404+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:03.888563+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:04.888790+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:05.889204+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:06.889362+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:07.889973+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:08.890122+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:09.890582+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:10.890771+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:11.890904+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:12.891156+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:13.891230+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:14.891363+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:15.891509+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:16.891658+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:17.891804+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:18.891988+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:19.892161+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:20.892353+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:21.892520+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:22.892681+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:23.892811+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:24.892952+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:25.893105+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:26.893250+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:27.893359+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:28.893467+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:29.893610+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:30.893754+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:31.893916+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:32.894043+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:33.894245+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:34.894380+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:35.894511+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:36.894715+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:37.894850+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:38.894990+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:39.895127+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:40.895356+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:41.895512+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:42.895686+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:43.895823+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:44.896017+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:45.896156+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:46.896278+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:47.896456+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:48.896578+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:49.896702+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:50.896860+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:51.897042+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:52.897156+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:53.897241+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:54.897390+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:55.897505+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:56.897591+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:57.897730+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:58.897846+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:44:59.897963+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:00.898096+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:01.898252+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:02.898411+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:03.898538+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:04.898666+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:05.898805+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:06.898949+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:07.899080+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:08.899197+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:09.899337+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:10.899483+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:11.899637+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:12.899767+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:13.899931+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:14.900049+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:15.900202+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:16.900324+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:17.900455+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:18.900598+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:19.900755+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:20.900911+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:21.901062+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:22.901262+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:23.901435+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:24.901613+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:25.901764+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:27.856360+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:28.856522+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:29.856748+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:30.856894+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:31.857059+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:32.857215+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:33.857347+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:34.857467+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:35.857647+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:36.857846+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:37.857958+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:38.858098+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:39.858246+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:40.858368+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:41.858535+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:42.858696+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:43.858801+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:44.858886+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:45.858958+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:46.859113+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:47.859251+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:48.859465+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:49.859597+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:50.859737+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:51.859965+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:52.860293+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:53.860452+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:54.860570+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:55.860705+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:56.860893+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:57.861071+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 933888 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:58.862036+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:45:59.862232+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:00.862484+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:01.862693+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:02.862859+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:03.863003+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:04.863165+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:05.863348+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:06.863473+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:07.863604+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:08.863716+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:09.863859+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:10.863982+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:11.864144+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:12.864252+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:13.864607+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:14.864874+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:15.865078+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:16.865262+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:17.865456+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 925696 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:18.865613+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:19.865746+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:20.865858+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:21.866122+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:22.866270+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:23.866413+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:24.866634+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:25.866784+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:26.866931+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:27.867053+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:28.867206+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:29.867335+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:30.867460+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:31.867982+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:32.868146+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:33.868291+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:34.869143+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:35.869425+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:36.869556+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:37.869669+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:38.869794+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:39.870096+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:40.870405+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:41.870614+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:42.870768+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:43.870920+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:44.871122+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:45.871241+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:46.871392+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:47.871553+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:48.871701+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:49.871868+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:50.872001+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:51.872252+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:52.872416+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 917504 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: mgrc ms_handle_reset ms_handle_reset con 0x5569201c0000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2943709997
Dec 01 21:03:52 compute-0 ceph-osd[86634]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2943709997,v1:192.168.122.100:6801/2943709997]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: get_auth_request con 0x55691f5c6800 auth_method 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: mgrc handle_mgr_configure stats_period=5
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:53.872544+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:54.872781+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:55.872919+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:56.873055+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:57.873248+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 ms_handle_reset con 0x55691f5c7000 session 0x55691fe961c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f5c6400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:58.873420+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 647168 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:46:59.873601+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:00.873740+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:01.873950+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:02.874076+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:03.874217+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:04.874584+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:05.874753+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:06.874875+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:07.874989+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:08.875267+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:09.915212+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:10.915390+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:11.915654+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:12.915831+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:13.915958+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:14.916132+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:15.916323+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:16.916519+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:17.916676+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:18.916898+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:19.917112+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:20.917260+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:21.917422+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:22.917547+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:23.917675+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:24.917845+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:25.918066+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:26.918287+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:27.918438+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:28.918574+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:29.918693+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:30.918828+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:31.919019+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:32.919212+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:33.919356+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:34.919565+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:35.919683+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:36.919817+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:37.920015+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:38.920284+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:39.920448+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:40.920591+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:41.920727+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:42.920867+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:43.921016+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:44.921160+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:45.921322+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:46.921460+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:47.921628+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:48.921782+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:49.921992+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:50.922154+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:51.922369+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:52.922501+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:53.922619+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:54.922824+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:55.923030+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:56.923152+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:57.923238+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:58.923367+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:47:59.923550+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:00.923718+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:01.923856+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 786432 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:02.923969+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:03.924077+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:04.924274+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:05.924388+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:06.924574+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:07.924709+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:08.924888+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:09.925145+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:10.925254+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:11.925391+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:12.925520+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:13.925692+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:14.925821+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:15.925968+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:16.926106+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:17.926302+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:18.926500+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:19.926645+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:20.926833+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:21.927755+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:22.927900+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:23.928029+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:24.928149+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:25.928271+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:26.928520+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:27.928678+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:28.928819+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:29.929010+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:30.929162+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:31.929424+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:32.929606+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:33.929794+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:34.929978+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:35.930138+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:36.930330+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:37.930483+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:38.930643+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:39.930777+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:40.930955+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:41.931288+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:42.931517+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:43.931730+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:44.931909+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:45.932059+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:46.932266+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:47.932398+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:48.932571+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:49.932749+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:50.932921+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:51.933128+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:52.933277+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:53.933422+0000)
Dec 01 21:03:52 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15124 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:54.933552+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:55.933713+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:56.933922+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:57.934123+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:58.934287+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:48:59.934472+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:00.934667+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:01.934880+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:02.935094+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:03.935294+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:04.935470+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:05.935665+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:06.935812+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:07.935964+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:08.936113+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:09.936498+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:10.936814+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:11.937007+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:12.937939+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:13.938378+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:14.938530+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:15.939325+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:16.939548+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:17.939709+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:18.939836+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:19.940266+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:20.940404+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:21.940543+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:22.940659+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:23.940817+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:24.940983+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:25.941150+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:26.941369+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:27.941578+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:28.941808+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 778240 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:29.941963+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:30.942174+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:31.942497+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:32.942680+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:33.942827+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:34.943039+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:35.943265+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:36.943488+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:37.943696+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:38.944011+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:39.944300+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:40.944591+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:41.944880+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:42.944986+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:43.945175+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:44.945317+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:45.945462+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:46.945599+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:47.945673+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:48.945823+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:49.945942+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:50.946060+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:51.946229+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:52.946349+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:53.946491+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:54.946639+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:55.946800+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:56.946972+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:57.956998+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:58.957163+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:49:59.957359+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:00.957480+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:01.957639+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:02.957794+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:03.957915+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:04.958111+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:05.958299+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:06.958464+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:07.958597+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:08.958728+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:09.958846+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:10.959027+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:11.959244+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:12.959397+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:13.959567+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:14.959724+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:15.959893+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:16.960054+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:17.960385+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:18.960556+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:19.960758+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:20.961071+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:21.961333+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:22.961481+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:23.961611+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:24.961831+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:25.961960+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:26.962135+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:27.962228+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:28.962373+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:29.962560+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:30.962753+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:31.962988+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:32.963235+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:33.963477+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:34.963651+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:35.963788+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:36.963929+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:37.964093+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:38.964261+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:39.964381+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:40.964522+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:41.964695+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:42.964813+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:43.964958+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:44.965262+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:45.965496+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:46.965722+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:47.965893+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:48.966107+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:49.966342+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:50.966692+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:51.966876+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:52.967044+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:53.967167+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:54.967298+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:55.967483+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:56.967717+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:57.967956+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:58.968134+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:50:59.968447+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:00.968591+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:01.968733+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:02.968861+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:03.969079+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:04.969250+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:05.969368+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:06.969485+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:07.969612+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:08.969733+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:09.969866+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:10.969996+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:11.970160+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:12.970390+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:13.970505+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:14.970633+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:15.970764+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:16.970876+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:17.971306+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:18.971446+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:19.971568+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:20.971705+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:21.971889+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:22.972078+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:23.972367+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread fragmentation_score=0.000122 took=0.000013s
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:24.972482+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:25.972601+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:26.972730+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:27.972881+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:28.973037+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:29.973228+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:30.973412+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:31.973649+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:32.973775+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:33.973883+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:34.974005+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:35.974137+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:36.974267+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:37.974501+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:38.974664+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:39.974903+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:40.975085+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:41.975306+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:42.975432+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:43.975540+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:44.975722+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:45.975880+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 770048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4357 writes, 20K keys, 4357 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4357 writes, 454 syncs, 9.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e509a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55691e5098d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:46.976033+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:47.976261+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:48.976399+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:49.976592+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:50.976759+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:51.976974+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:52.977203+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:53.977511+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:54.977756+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:55.977939+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:56.978108+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:57.978283+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:58.978479+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:51:59.978653+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:00.978817+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 512607 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:01.978997+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 737280 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:02.979115+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 729088 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:03.979224+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 729088 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:04.979428+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 729088 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 971.637207031s of 971.653137207s, submitted: 8
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:05.979594+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 516099 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 548864 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 65 heartbeat osd_stat(store_statfs(0x4fe11d000/0x0/0x4ffc00000, data 0x4890b/0xaf000, compress 0x0/0x0/0x0, omap 0x8cba, meta 0x1a27346), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:06.980107+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 548864 heap: 72720384 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:07.980232+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 9805824 heap: 77381632 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:08.980398+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 9805824 heap: 77381632 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:09.980521+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 67 ms_handle_reset con 0x55691f315400 session 0x55692001b880
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 9789440 heap: 77381632 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15126 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:10.980686+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 547035 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 9789440 heap: 77381632 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:11.980834+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 17915904 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 67 heartbeat osd_stat(store_statfs(0x4fdca1000/0x0/0x4ffc00000, data 0x4bcb05/0x529000, compress 0x0/0x0/0x0, omap 0x903e, meta 0x1a26fc2), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:12.981049+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 ms_handle_reset con 0x556922501000 session 0x55691ffdca80
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:13.981247+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:14.981453+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:15.981686+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:16.982609+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:17.982763+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:18.982893+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:19.983070+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:20.983274+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:21.983494+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:22.983754+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:23.983879+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:24.984021+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:25.984167+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:26.984673+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:27.984816+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:28.985026+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:29.985226+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:30.985382+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:31.985660+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:32.985802+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:33.985973+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:34.986247+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:35.986390+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:36.986543+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:37.986687+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:38.986827+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:39.986955+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:40.987147+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:41.987531+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:42.987701+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:43.987890+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:44.988017+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 17842176 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02d000/0x0/0x4ffc00000, data 0x112e12d/0x119d000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:45.988230+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 618208 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feab800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.277194977s of 40.485038757s, submitted: 37
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 17711104 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 ms_handle_reset con 0x55691feab800 session 0x556922036380
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:46.988428+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd02e000/0x0/0x4ffc00000, data 0x112e52d/0x119e000, compress 0x0/0x0/0x0, omap 0x9225, meta 0x1a26ddb), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 68 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 17719296 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:47.988662+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 17719296 heap: 85778432 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feabc00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:48.989329+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 17170432 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:49.989472+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 25313280 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feaa800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:50.989599+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 70 ms_handle_reset con 0x55691feabc00 session 0x5569220eb6c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 70 ms_handle_reset con 0x55691feaa800 session 0x556920825880
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 878771 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 25419776 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:51.989773+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 71 ms_handle_reset con 0x55691f315400 session 0x5569220ea1c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feab800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a32c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 25403392 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:52.989952+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 71 ms_handle_reset con 0x556921a32c00 session 0x55691ffdd6c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 72 ms_handle_reset con 0x55691feab800 session 0x556922069a40
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fc81d000/0x0/0x4ffc00000, data 0x1133d09/0x11aa000, compress 0x0/0x0/0x0, omap 0x8cd0, meta 0x1a27330), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55692243e000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 72 ms_handle_reset con 0x55692243e000 session 0x55692196fa40
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 25264128 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:53.990148+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a38c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 25133056 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:54.990340+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 73 ms_handle_reset con 0x556921a38c00 session 0x55692196f340
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 25542656 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:55.990819+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 74 ms_handle_reset con 0x55691f315400 session 0x55691fc48000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a32c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 647567 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 25542656 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 74 heartbeat osd_stat(store_statfs(0x4fd019000/0x0/0x4ffc00000, data 0x113655b/0x11af000, compress 0x0/0x0/0x0, omap 0x883e, meta 0x1a277c2), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:56.991108+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.050306320s of 10.585802078s, submitted: 110
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 75 ms_handle_reset con 0x556921a32c00 session 0x55691ffdc000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 25542656 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:57.991763+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a38c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 25534464 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:58.992400+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 76 ms_handle_reset con 0x556921a38c00 session 0x55691fea4700
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 25534464 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55692243e000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:52:59.992568+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 77 ms_handle_reset con 0x55692243e000 session 0x55691fc49340
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 25354240 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:00.992889+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663303 data_alloc: 218103808 data_used: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 25354240 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:01.993042+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fd00d000/0x0/0x4ffc00000, data 0x113a791/0x11ba000, compress 0x0/0x0/0x0, omap 0x8e4d, meta 0x1a271b3), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 25354240 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:02.993238+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 78 ms_handle_reset con 0x556923da7c00 session 0x5569220696c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 69001216 unmapped: 25174016 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a32c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:03.993395+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 25157632 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:04.993514+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 79 heartbeat osd_stat(store_statfs(0x4fd00d000/0x0/0x4ffc00000, data 0x113bc5d/0x11bd000, compress 0x0/0x0/0x0, omap 0x8ebe, meta 0x1a27142), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 80 ms_handle_reset con 0x556923da7c00 session 0x556922037dc0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 80 ms_handle_reset con 0x556921a32c00 session 0x55691ff448c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 80 ms_handle_reset con 0x55691f315400 session 0x556922036c40
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 80 ms_handle_reset con 0x556923da7800 session 0x5569220361c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 25100288 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:05.993722+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679404 data_alloc: 218103808 data_used: 51
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 25100288 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 81 heartbeat osd_stat(store_statfs(0x4fcffd000/0x0/0x4ffc00000, data 0x1140052/0x11c9000, compress 0x0/0x0/0x0, omap 0x9003, meta 0x1a26ffd), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:06.993873+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.062306404s of 10.219599724s, submitted: 87
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 82 ms_handle_reset con 0x556923da7400 session 0x55692196fa40
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 24027136 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:07.993994+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 83 ms_handle_reset con 0x556923da7c00 session 0x55692196f880
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 23994368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:08.994214+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 83 ms_handle_reset con 0x556923da7800 session 0x5569220b1880
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 23994368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:09.994474+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 23994368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fcff7000/0x0/0x4ffc00000, data 0x1142cab/0x11d1000, compress 0x0/0x0/0x0, omap 0x8901, meta 0x1a276ff), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:10.994703+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 85 ms_handle_reset con 0x556923da7400 session 0x5569220b1dc0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695985 data_alloc: 218103808 data_used: 51
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 23994368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:11.994896+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 85 heartbeat osd_stat(store_statfs(0x4fcff5000/0x0/0x4ffc00000, data 0x11442a4/0x11d5000, compress 0x0/0x0/0x0, omap 0x8942, meta 0x1a276be), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 23994368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:12.995122+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 86 ms_handle_reset con 0x55691f315400 session 0x55692196fc00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 23961600 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:13.995268+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556921a32c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 86 ms_handle_reset con 0x556921a32c00 session 0x5569220b16c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 23937024 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:14.995428+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 86 ms_handle_reset con 0x556923da7400 session 0x55692196ee00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 87 ms_handle_reset con 0x55691f315400 session 0x556920824c40
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 23879680 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:15.995564+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fcfef000/0x0/0x4ffc00000, data 0x1146d3d/0x11db000, compress 0x0/0x0/0x0, omap 0x8aa1, meta 0x1a2755f), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 88 ms_handle_reset con 0x556923da7800 session 0x556922068380
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 88 ms_handle_reset con 0x556923da7c00 session 0x55691fc49880
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 706356 data_alloc: 218103808 data_used: 51
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923dc0000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923dc0400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923dc0800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 23289856 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:16.995698+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.869216919s of 10.003292084s, submitted: 89
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 89 ms_handle_reset con 0x556923dc0000 session 0x55691fc48380
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 23175168 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:17.995845+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 23166976 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:18.995998+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 89 heartbeat osd_stat(store_statfs(0x4fbe45000/0x0/0x4ffc00000, data 0x114b297/0x11e3000, compress 0x0/0x0/0x0, omap 0x801b, meta 0x2bc7fe5), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 89 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 90 ms_handle_reset con 0x55691f315400 session 0x55692208c8c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 22298624 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:19.996160+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 22249472 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:20.996346+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 91 ms_handle_reset con 0x556923da7800 session 0x556920784380
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 713425 data_alloc: 218103808 data_used: 133
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 22257664 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 91 ms_handle_reset con 0x556923da7400 session 0x556920784fc0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:21.996522+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923dc0c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 22102016 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 91 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:22.996659+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 92 ms_handle_reset con 0x556923da7c00 session 0x55691fc49500
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fbe44000/0x0/0x4ffc00000, data 0x114e056/0x11e8000, compress 0x0/0x0/0x0, omap 0x7b07, meta 0x2bc84f9), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 92 ms_handle_reset con 0x556923dc0c00 session 0x55691fc49a40
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 20914176 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:23.996814+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 93 ms_handle_reset con 0x55691f315400 session 0x55692196ec40
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 20889600 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:24.997080+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 93 ms_handle_reset con 0x556923da7400 session 0x55691fc48000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 20873216 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:25.997279+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 94 ms_handle_reset con 0x556923da7800 session 0x55691fc496c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 721609 data_alloc: 218103808 data_used: 4178
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 20799488 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:26.997427+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.220775604s of 10.000844002s, submitted: 160
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 95 ms_handle_reset con 0x556923da7c00 session 0x5569220eac40
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 20676608 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feab800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:27.997578+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 96 ms_handle_reset con 0x55691feab800 session 0x55692001b880
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 20463616 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:28.997709+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0x1155203/0x11f6000, compress 0x0/0x0/0x0, omap 0x15410, meta 0x2bbabf0), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 20463616 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:29.997850+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 20463616 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:30.997990+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 733078 data_alloc: 218103808 data_used: 4178
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 20463616 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x55691f315400 session 0x556922069a40
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x556923da7800 session 0x5569220b1340
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x556923da7400 session 0x55692208cfc0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x556923da7c00 session 0x5569220b1c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feaa800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x55691feaa800 session 0x55691ff636c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:31.998142+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x55691f315400 session 0x5569220eb500
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x556923da7400 session 0x5569220b1880
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fbe33000/0x0/0x4ffc00000, data 0x11566fb/0x11f9000, compress 0x0/0x0/0x0, omap 0x15697, meta 0x2bba969), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 97 ms_handle_reset con 0x556923da7800 session 0x55691ff44a80
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 20283392 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:32.998302+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 98 ms_handle_reset con 0x556923da7c00 session 0x55692208dc00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feabc00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 98 ms_handle_reset con 0x55691feabc00 session 0x556920784e00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 98 ms_handle_reset con 0x55691f315400 session 0x55692208c1c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 20373504 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:33.998454+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923da7c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 20348928 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:34.998620+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 20348928 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:35.998731+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 737260 data_alloc: 218103808 data_used: 4690
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 98 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0x1157c0f/0x11fd000, compress 0x0/0x0/0x0, omap 0x15a06, meta 0x2bba5fa), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 20332544 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:36.998852+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 20332544 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:37.999002+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.225811958s of 11.280440331s, submitted: 55
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 99 ms_handle_reset con 0x556922501800 session 0x556922069180
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 20275200 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:38.999132+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 100 ms_handle_reset con 0x556922501400 session 0x556920784700
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 100 ms_handle_reset con 0x556922501000 session 0x55691ff62540
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556923dc3c00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 19996672 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 100 ms_handle_reset con 0x556923dc3c00 session 0x556920824380
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f315400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:39.999388+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 101 ms_handle_reset con 0x55691f315400 session 0x55691ff63880
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 19988480 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:40.999550+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fbe24000/0x0/0x4ffc00000, data 0x115bcad/0x1206000, compress 0x0/0x0/0x0, omap 0x16214, meta 0x2bb9dec), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 745564 data_alloc: 218103808 data_used: 5202
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 19988480 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:41.999725+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 19988480 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:42.999849+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 19988480 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:44.000000+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 102 ms_handle_reset con 0x556922501000 session 0x5569220a01c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 19857408 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 102 heartbeat osd_stat(store_statfs(0x4fbe24000/0x0/0x4ffc00000, data 0x115bcad/0x1206000, compress 0x0/0x0/0x0, omap 0x16214, meta 0x2bb9dec), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:45.000139+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556922501400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 19865600 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fbe21000/0x0/0x4ffc00000, data 0x115d296/0x1209000, compress 0x0/0x0/0x0, omap 0x16649, meta 0x2bb99b7), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:46.000260+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 103 ms_handle_reset con 0x556922501400 session 0x556922069500
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 751112 data_alloc: 218103808 data_used: 5202
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 19824640 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:47.000390+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 103 ms_handle_reset con 0x556923da7800 session 0x556922036700
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 103 ms_handle_reset con 0x556923da7c00 session 0x5569220b0e00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fbe1e000/0x0/0x4ffc00000, data 0x115e8d3/0x120c000, compress 0x0/0x0/0x0, omap 0x16a68, meta 0x2bb9598), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 19824640 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:48.007385+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feabc00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 19808256 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:49.007522+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.630190849s of 10.746520996s, submitted: 77
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 105 ms_handle_reset con 0x55691feabc00 session 0x5569208248c0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fbe19000/0x0/0x4ffc00000, data 0x115fd9f/0x120f000, compress 0x0/0x0/0x0, omap 0x16d54, meta 0x2bb92ac), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:50.007688+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:51.007932+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 756035 data_alloc: 218103808 data_used: 5202
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:52.008307+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:53.008449+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:54.008601+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 105 ms_handle_reset con 0x556923dc0400 session 0x55692208c000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 105 ms_handle_reset con 0x556923dc0800 session 0x55691fc2fc00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x5569208a2800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 106 ms_handle_reset con 0x5569208a2800 session 0x55692211b340
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:55.008793+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0x116286c/0x1213000, compress 0x0/0x0/0x0, omap 0x172b8, meta 0x2bb8d48), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 19800064 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691fee1400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 106 ms_handle_reset con 0x55691fee1400 session 0x55691fea4700
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:56.008950+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691feabc00
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 757552 data_alloc: 218103808 data_used: 5151
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 19931136 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:57.009073+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 107 ms_handle_reset con 0x55691feabc00 session 0x55691fc48c40
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 107 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0x1163e28/0x1213000, compress 0x0/0x0/0x0, omap 0x176ab, meta 0x2bb8955), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 19914752 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:58.009241+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0x1163e28/0x1213000, compress 0x0/0x0/0x0, omap 0x176ab, meta 0x2bb8955), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 19906560 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:53:59.009393+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 19906560 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:00.009578+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fbe14000/0x0/0x4ffc00000, data 0x11652f4/0x1216000, compress 0x0/0x0/0x0, omap 0x1799e, meta 0x2bb8662), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 19906560 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:01.009771+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763008 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 19906560 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:02.010007+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fbe14000/0x0/0x4ffc00000, data 0x11652f4/0x1216000, compress 0x0/0x0/0x0, omap 0x1799e, meta 0x2bb8662), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 19906560 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:03.010233+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 19898368 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:04.010360+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fbe14000/0x0/0x4ffc00000, data 0x11652f4/0x1216000, compress 0x0/0x0/0x0, omap 0x1799e, meta 0x2bb8662), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.974781036s of 15.065981865s, submitted: 76
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:05.010490+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:06.010654+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:07.010737+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:08.010900+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:09.010997+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:10.011142+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:11.011329+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:12.011527+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:13.011708+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:14.011807+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:15.011942+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:16.012088+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:17.012264+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:18.012447+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:19.012622+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:20.012811+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:21.013084+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:22.013397+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:23.013542+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:24.013940+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:25.014073+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:26.014235+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:27.014361+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:28.014502+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:29.014659+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:30.014824+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:31.014995+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:32.015523+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:33.015670+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:34.015864+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:35.016013+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:36.016216+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:37.016429+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:38.016643+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:39.016831+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:40.017162+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:41.017405+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:42.017753+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:43.017946+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:44.018120+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:45.018267+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:46.018469+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:47.018622+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:48.018760+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:49.018946+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:50.019096+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:51.019244+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:52.019427+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:53.019566+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:54.019823+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:55.019956+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:56.020094+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:57.020304+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:58.020508+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:54:59.020674+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:00.020809+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:01.020982+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:02.021250+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:03.021450+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:04.021609+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:05.021803+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:06.021946+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:07.022079+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:08.022247+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:09.022443+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:10.022688+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:11.022868+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:12.023171+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:13.023399+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:14.023592+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:15.023742+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:16.023908+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:17.024081+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:18.024272+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:19.024409+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:20.024591+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:21.024760+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:22.024988+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:23.025144+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:24.025323+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:25.025508+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:26.025660+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:27.025828+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:28.026001+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:29.026148+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:30.026507+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:31.027129+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 19939328 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:32.027425+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 19873792 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'config diff' '{prefix=config diff}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:33.027596+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'config show' '{prefix=config show}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'counter dump' '{prefix=counter dump}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'counter schema' '{prefix=counter schema}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 19333120 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:34.027731+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 19537920 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:35.027887+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 19537920 heap: 94175232 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'log dump' '{prefix=log dump}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:36.028044+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'perf dump' '{prefix=perf dump}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'perf schema' '{prefix=perf schema}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:37.028303+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:38.028481+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:39.028717+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:40.028871+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:41.029033+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:42.029311+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:43.029412+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:44.029578+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:45.029734+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:46.029863+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:47.030020+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:48.030166+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:49.030312+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:50.030427+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:51.030555+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:52.030693+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:53.030815+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:54.030959+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:55.031104+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:56.031279+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:57.032081+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:58.032228+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:55:59.032392+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:00.032557+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:01.032700+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:02.032838+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:03.032996+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:04.033138+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:05.033319+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:06.033505+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:07.033697+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:08.033874+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:09.034054+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:10.034247+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:11.034452+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:12.034719+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:13.034997+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:14.035261+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:15.035455+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:16.035636+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:17.035885+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:18.036053+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:19.036278+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:20.036427+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:21.036623+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:22.036996+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:23.037174+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:24.037404+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:25.037659+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:26.037957+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:27.038334+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:28.038542+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:29.038740+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:30.039131+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:31.039413+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:32.039775+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:33.040073+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:34.040321+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:35.040598+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:36.040941+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:37.041318+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:38.041574+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:39.041781+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:40.042067+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:41.042523+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:42.042755+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:43.043003+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:44.043305+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:45.043554+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:46.043808+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:47.044068+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:48.044301+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:49.044558+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:50.044837+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:51.045109+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:52.045467+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:53.045787+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:54.046095+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:55.046360+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:56.046646+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:57.046943+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:58.047220+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:56:59.047433+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:00.048364+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:01.048597+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:02.048831+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 30654464 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:03.048997+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 30646272 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:04.049293+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 30646272 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:05.049699+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 30646272 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:06.050076+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 30646272 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:07.050337+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 30646272 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:08.050507+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 30646272 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:09.050802+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 30646272 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:10.050986+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 30638080 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:11.051341+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 30638080 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:12.051835+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 30638080 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:13.051950+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 30638080 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:14.052155+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 30638080 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:15.052430+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 30638080 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:16.052610+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 30638080 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:17.052856+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 30638080 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:18.053039+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 30638080 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:19.053368+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 30638080 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:20.053583+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 30638080 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:21.053808+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: no keepalive since 2025-12-01T20:57:51.053893+0000 (2106-02-07T06:28:15.999895+0000 seconds), reconnecting
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _reopen_session rank -1
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _add_conns ranks=[0]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient(hunting): picked mon.compute-0 con 0x55691fee9800 addr [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient(hunting): start opening mon connection
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient(hunting): _renew_subs
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient(hunting): get_auth_request con 0x55691fee9800 auth_method 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient(hunting): _init_auth method 2
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient(hunting): _init_auth already have auth, reseting
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient(hunting): handle_auth_reply_more payload 9
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient(hunting): handle_auth_reply_more payload_len 9
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient(hunting): handle_auth_done global_id 14195 payload 293
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _finish_hunting 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: found mon.compute-0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _finish_auth 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:21.056050+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_monmap mon_map magic: 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient:  got monmap 1 from mon.compute-0 (according to old e1)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: dump:
                                           epoch 1
                                           fsid dcf60a89-bba0-58b0-a1bf-d4bde723201b
                                           last_changed 2025-12-01T20:31:09.927398+0000
                                           created 2025-12-01T20:31:09.927398+0000
                                           min_mon_release 20 (tentacle)
                                           election_strategy: 1
                                           0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_config config(9 keys)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: set_mon_vals no callback set
Dec 01 21:03:52 compute-0 ceph-osd[86634]: mgrc handle_mgr_map Got map version 9
Dec 01 21:03:52 compute-0 ceph-osd[86634]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2943709997,v1:192.168.122.100:6801/2943709997]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:25.555287+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:26.555496+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:27.555707+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:28.555833+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:29.556041+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:30.556247+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:31.556488+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:32.556766+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:33.556974+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:34.557222+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:35.557491+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:36.557766+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 30580736 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:37.558025+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:38.558301+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:39.558548+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:40.558811+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:41.559066+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:42.559304+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:43.559517+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:44.559861+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:45.560083+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:46.560336+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:47.560595+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:48.560819+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:49.561020+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:50.561368+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:51.561610+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 30572544 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:52.561932+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:53.562239+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:54.562458+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:55.562686+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:56.562850+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:57.563066+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:58.563260+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:57:59.563438+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:00.563640+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:01.563888+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:02.564109+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:03.564274+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:04.564420+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:05.564634+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:06.565029+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:07.565225+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:08.565375+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:09.565499+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:10.565655+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:11.565824+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:12.566021+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:13.566213+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:14.566458+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:15.566685+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:16.567530+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:17.567877+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:18.568849+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:19.569612+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:20.570083+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:21.570333+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:22.570800+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:23.570969+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:24.571380+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:25.571577+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:26.571787+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:27.571929+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:28.572156+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:29.572551+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 30564352 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:30.572808+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:31.573133+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:32.573523+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:33.573677+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:34.573996+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:35.574248+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:36.574464+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:37.574683+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:38.574912+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:39.575120+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:40.575347+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:41.575605+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:42.575934+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 30556160 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:43.576151+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:44.576420+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:45.576676+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:46.576860+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:47.577002+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:48.577299+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:49.577597+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:50.578484+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:51.578682+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:52.578974+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:53.579130+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:54.579310+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:55.579547+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:56.579809+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:57.580007+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:58.580294+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:58:59.580472+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:00.580747+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:01.580935+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:02.581302+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:03.581537+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:04.581756+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:05.581953+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:06.582099+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:07.582297+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:08.582527+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:09.582769+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:10.583041+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:11.583218+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:12.583388+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:13.583531+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:14.583734+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:15.585066+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:16.585339+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:17.585495+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:18.585679+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:19.585857+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:20.586207+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 30547968 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:21.586360+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:22.586581+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:23.586759+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:24.586925+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:25.587073+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:26.587269+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:27.587432+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:28.587618+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:29.587769+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:30.587910+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:31.588054+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:32.588283+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:33.588448+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:34.588566+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 30539776 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:35.588684+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:36.588834+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:37.589067+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:38.589253+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:39.589506+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:40.589677+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:41.589830+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:42.590080+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:43.590245+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:44.590408+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:45.590630+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:46.590766+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:47.590941+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 30531584 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:48.591158+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:49.591416+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:50.591672+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:51.591810+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:52.591987+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:53.592259+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:54.592386+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:55.592652+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:56.592831+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:57.592967+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:58.593171+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T20:59:59.593405+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:00.593655+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:01.593830+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:02.594109+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 30523392 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:03.594268+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 30515200 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:04.594460+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 30515200 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:05.594746+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 30515200 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:06.594935+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 30515200 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:07.595122+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 30515200 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:08.595350+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 30515200 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:09.595590+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 30515200 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:10.595846+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 30515200 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:11.596035+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 30515200 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:12.596257+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 30515200 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:13.596490+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:14.596738+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 30507008 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:15.596958+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:16.597123+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:17.597286+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:18.597543+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:19.597682+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:20.597832+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:21.598032+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:22.598257+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:23.598436+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:24.598641+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:25.598855+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:26.599071+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:27.599269+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 30498816 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:28.599410+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 30490624 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:29.599560+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 30490624 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:30.599785+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 30490624 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:31.599995+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 30490624 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:32.600347+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 30490624 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:33.600537+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:34.600708+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:35.600900+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:36.601076+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:37.601236+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:38.601380+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:39.601587+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:40.601815+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:41.602013+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:42.602225+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:43.602411+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:44.602619+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:45.602774+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:46.602969+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:47.603118+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:48.603238+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:49.603455+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:50.603661+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:51.603789+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:52.604035+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:53.604269+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:54.604501+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:55.604686+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:56.604869+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 30482432 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:57.605120+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 30474240 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:58.605350+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 30466048 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:00:59.605515+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 30466048 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:00.605886+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 30466048 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:01.606161+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 30466048 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:02.606578+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 30466048 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:03.606911+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 30457856 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:04.607128+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 30457856 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:05.607484+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 30457856 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:06.607680+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 30457856 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:07.607930+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 30457856 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:08.608275+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 30457856 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:09.608513+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 30449664 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:10.608824+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 30449664 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:11.609057+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 30449664 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:12.609298+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 30449664 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:13.609546+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 30449664 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:14.609798+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 30449664 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:15.609998+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 30449664 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:16.610297+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 30449664 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:17.610575+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 30449664 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:18.610811+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:19.611052+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:20.611265+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:21.611450+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:22.611668+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:23.611826+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:24.612029+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:25.612217+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:26.612403+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:27.612519+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:28.612687+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:29.612825+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:30.613010+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:31.613225+0000)
Dec 01 21:03:52 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:32.613458+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:33.613653+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:34.613841+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:35.614056+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:36.614321+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:37.614459+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:38.614625+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:39.614815+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:40.615008+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:41.615257+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:42.615465+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 30441472 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:43.615670+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 30433280 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:44.615944+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 30433280 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:45.616164+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 30433280 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:46.616404+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5601 writes, 23K keys, 5601 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5601 writes, 998 syncs, 5.61 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1244 writes, 3763 keys, 1244 commit groups, 1.0 writes per commit group, ingest: 2.08 MB, 0.00 MB/s
                                           Interval WAL: 1244 writes, 544 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 30433280 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:47.616658+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 30433280 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:48.616939+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 30433280 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:49.617213+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 30433280 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:50.617467+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 30433280 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:51.617656+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 30425088 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:52.617961+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 30425088 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: mgrc ms_handle_reset ms_handle_reset con 0x55691f5c6800
Dec 01 21:03:52 compute-0 ceph-osd[86634]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2943709997
Dec 01 21:03:52 compute-0 ceph-osd[86634]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2943709997,v1:192.168.122.100:6801/2943709997]
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: get_auth_request con 0x55691feab800 auth_method 0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: mgrc handle_mgr_configure stats_period=5
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:53.618249+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 30253056 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:54.618421+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 30253056 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:55.618613+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 30253056 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:56.618825+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 30253056 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:57.619037+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 ms_handle_reset con 0x55691f5c6400 session 0x5569220b0a80
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x556920cb5000
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:58.619260+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:01:59.619487+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:00.619679+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:01.619782+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:02.619987+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:03.620256+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:04.620396+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:05.620542+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:06.620670+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:07.620858+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:08.620995+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:09.621088+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:10.621305+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:11.621457+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:12.621679+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:13.621874+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:14.622072+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:15.622317+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:16.622481+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:17.622664+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:18.622884+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:19.623064+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:20.623270+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 30244864 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:21.623502+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:22.623765+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:23.623922+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:24.624059+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:25.624291+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:26.624522+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:27.624752+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:28.624971+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:29.625150+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:30.625317+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:31.625540+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:32.625862+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:33.626087+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 ms_handle_reset con 0x55691fee1c00 session 0x556920825180
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: handle_auth_request added challenge on 0x55691f5c6400
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:34.626292+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:35.626553+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:36.626711+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:37.626863+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:38.627059+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:39.627362+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:40.627585+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:41.627884+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:42.628275+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:43.628569+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:44.628813+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:45.629012+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:46.629364+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:47.629677+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:48.629901+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:49.630143+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:50.630496+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:51.630768+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:52.631118+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:53.631414+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:54.631657+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:55.631996+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:56.632352+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:57.632605+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:58.632823+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:02:59.633051+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:00.633314+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:01.633452+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:02.633585+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:03.633751+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:04.633957+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:05.634166+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:06.634362+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:07.634477+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:08.634582+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:09.634721+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:10.634868+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:11.634995+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:12.635172+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:13.635321+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:14.635561+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:15.635792+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:16.635933+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:17.636136+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0x11667a4/0x1219000, compress 0x0/0x0/0x0, omap 0x17d5a, meta 0x2bb82a6), peers [1,2] op hist [])
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:18.636357+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 01 21:03:52 compute-0 ceph-osd[86634]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 01 21:03:52 compute-0 ceph-osd[86634]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765782 data_alloc: 218103808 data_used: 5085
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 30236672 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:19.636558+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'config diff' '{prefix=config diff}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'config show' '{prefix=config show}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'counter dump' '{prefix=counter dump}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'counter schema' '{prefix=counter schema}'
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 30089216 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:20.636814+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 30064640 heap: 105218048 old mem: 2845415832 new mem: 2845415832
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: tick
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_tickets
Dec 01 21:03:52 compute-0 ceph-osd[86634]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-01T21:03:21.637045+0000)
Dec 01 21:03:52 compute-0 ceph-osd[86634]: do_command 'log dump' '{prefix=log dump}'
Dec 01 21:03:53 compute-0 ceph-mon[75880]: from='client.15118 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:53 compute-0 ceph-mon[75880]: from='client.15120 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:53 compute-0 ceph-mon[75880]: from='client.15122 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:53 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1070: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:53 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15130 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:53 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15132 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:53 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 01 21:03:53 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/993391713' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 01 21:03:54 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15136 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:54 compute-0 ceph-mon[75880]: from='client.15124 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:54 compute-0 ceph-mon[75880]: from='client.15126 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:54 compute-0 ceph-mon[75880]: pgmap v1070: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:54 compute-0 ceph-mon[75880]: from='client.15130 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:54 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/993391713' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 01 21:03:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Dec 01 21:03:54 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643901992' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 01 21:03:54 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15140 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:54 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 01 21:03:54 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/906927124' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 01 21:03:55 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1071: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 01 21:03:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1190469854' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 01 21:03:55 compute-0 ceph-mon[75880]: from='client.15132 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:55 compute-0 ceph-mon[75880]: from='client.15136 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:55 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/643901992' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 01 21:03:55 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/906927124' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 01 21:03:55 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1190469854' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 01 21:03:55 compute-0 systemd[1]: Starting Hostname Service...
Dec 01 21:03:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 01 21:03:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 01 21:03:55 compute-0 systemd[1]: Started Hostname Service.
Dec 01 21:03:55 compute-0 nova_compute[244568]: 2025-12-01 21:03:55.957 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:55 compute-0 nova_compute[244568]: 2025-12-01 21:03:55.958 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 01 21:03:55 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 01 21:03:55 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340826157' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 01 21:03:56 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15152 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:56 compute-0 ceph-mon[75880]: from='client.15140 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 01 21:03:56 compute-0 ceph-mon[75880]: pgmap v1071: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:56 compute-0 ceph-mon[75880]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 01 21:03:56 compute-0 ceph-mon[75880]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 01 21:03:56 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2340826157' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 01 21:03:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 01 21:03:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2881935567' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 01 21:03:57 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1072: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:57 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Dec 01 21:03:57 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046974336' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 01 21:03:57 compute-0 nova_compute[244568]: 2025-12-01 21:03:57.983 244572 DEBUG oslo_service.periodic_task [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 01 21:03:57 compute-0 nova_compute[244568]: 2025-12-01 21:03:57.984 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 01 21:03:58 compute-0 nova_compute[244568]: 2025-12-01 21:03:58.278 244572 DEBUG nova.compute.manager [None req-b7c6d674-c553-4145-b551-a25622d8b9ec - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 01 21:03:58 compute-0 ceph-mon[75880]: from='client.15152 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:03:58 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/2881935567' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 01 21:03:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 01 21:03:58 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3776650288' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 01 21:03:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 01 21:03:58 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 01 21:03:58 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145783326' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 01 21:03:59 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1073: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:03:59 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15162 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:04:00 compute-0 ceph-mon[75880]: pgmap v1072: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:04:00 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3046974336' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 01 21:04:00 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/3776650288' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 01 21:04:00 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1145783326' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 01 21:04:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 01 21:04:00 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/419735046' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 01 21:04:00 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 01 21:04:00 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1519393611' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 01 21:04:01 compute-0 ceph-mon[75880]: pgmap v1073: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:04:01 compute-0 ceph-mon[75880]: from='client.15162 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:04:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/419735046' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 01 21:04:01 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1519393611' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 01 21:04:01 compute-0 ceph-mgr[76174]: log_channel(cluster) log [DBG] : pgmap v1074: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:04:01 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15168 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:04:01 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 01 21:04:01 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1218230441' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 01 21:04:02 compute-0 ceph-mon[75880]: pgmap v1074: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec 01 21:04:02 compute-0 ceph-mon[75880]: from='client.15168 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:04:02 compute-0 ceph-mon[75880]: from='client.? 192.168.122.100:0/1218230441' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 01 21:04:02 compute-0 ceph-mgr[76174]: log_channel(audit) log [DBG] : from='client.15172 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 01 21:04:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 01 21:04:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268211566' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 01 21:04:02 compute-0 ceph-mon[75880]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 01 21:04:02 compute-0 ceph-mon[75880]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268211566' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
